00:00:00.001 Started by upstream project "autotest-per-patch" build number 126155 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.046 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.047 The recommended git tool is: git 00:00:00.047 using credential 00000000-0000-0000-0000-000000000002 00:00:00.049 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.087 Fetching changes from the remote Git repository 00:00:00.089 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.135 Using shallow fetch with depth 1 00:00:00.135 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.135 > git --version # timeout=10 00:00:00.171 > git --version # 'git version 2.39.2' 00:00:00.171 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.197 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.197 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.691 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.702 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.716 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:03.716 > git config core.sparsecheckout # timeout=10 00:00:03.727 > git read-tree -mu HEAD # timeout=10 00:00:03.744 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:03.764 Commit message: "inventory: add WCP3 to free inventory" 00:00:03.764 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:03.875 [Pipeline] Start of Pipeline 00:00:03.888 [Pipeline] library 00:00:03.889 Loading library shm_lib@master 00:00:03.889 Library shm_lib@master is cached. Copying from home. 00:00:03.907 [Pipeline] node 00:00:03.921 Running on CYP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.923 [Pipeline] { 00:00:03.937 [Pipeline] catchError 00:00:03.939 [Pipeline] { 00:00:03.956 [Pipeline] wrap 00:00:03.969 [Pipeline] { 00:00:03.978 [Pipeline] stage 00:00:03.980 [Pipeline] { (Prologue) 00:00:04.151 [Pipeline] sh 00:00:04.460 + logger -p user.info -t JENKINS-CI 00:00:04.480 [Pipeline] echo 00:00:04.481 Node: CYP11 00:00:04.490 [Pipeline] sh 00:00:04.827 [Pipeline] setCustomBuildProperty 00:00:04.843 [Pipeline] echo 00:00:04.844 Cleanup processes 00:00:04.850 [Pipeline] sh 00:00:05.135 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.135 341001 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.151 [Pipeline] sh 00:00:05.435 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.435 ++ grep -v 'sudo pgrep' 00:00:05.435 ++ awk '{print $1}' 00:00:05.435 + sudo kill -9 00:00:05.435 + true 00:00:05.453 [Pipeline] cleanWs 00:00:05.464 [WS-CLEANUP] Deleting project workspace... 00:00:05.464 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.472 [WS-CLEANUP] done 00:00:05.477 [Pipeline] setCustomBuildProperty 00:00:05.493 [Pipeline] sh 00:00:05.772 + sudo git config --global --replace-all safe.directory '*' 00:00:05.847 [Pipeline] httpRequest 00:00:05.890 [Pipeline] echo 00:00:05.891 Sorcerer 10.211.164.101 is alive 00:00:05.900 [Pipeline] httpRequest 00:00:05.905 HttpMethod: GET 00:00:05.906 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.906 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.909 Response Code: HTTP/1.1 200 OK 00:00:05.909 Success: Status code 200 is in the accepted range: 200,404 00:00:05.910 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:07.026 [Pipeline] sh 00:00:07.333 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:07.354 [Pipeline] httpRequest 00:00:07.393 [Pipeline] echo 00:00:07.394 Sorcerer 10.211.164.101 is alive 00:00:07.403 [Pipeline] httpRequest 00:00:07.407 HttpMethod: GET 00:00:07.408 URL: http://10.211.164.101/packages/spdk_a22f117fe5f0b0fdd392a07d6811ed9bd7a0a55f.tar.gz 00:00:07.408 Sending request to url: http://10.211.164.101/packages/spdk_a22f117fe5f0b0fdd392a07d6811ed9bd7a0a55f.tar.gz 00:00:07.429 Response Code: HTTP/1.1 200 OK 00:00:07.430 Success: Status code 200 is in the accepted range: 200,404 00:00:07.430 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a22f117fe5f0b0fdd392a07d6811ed9bd7a0a55f.tar.gz 00:01:55.870 [Pipeline] sh 00:01:56.168 + tar --no-same-owner -xf spdk_a22f117fe5f0b0fdd392a07d6811ed9bd7a0a55f.tar.gz 00:01:59.470 [Pipeline] sh 00:01:59.752 + git -C spdk log --oneline -n5 00:01:59.753 a22f117fe nvme/perf: Use sqthread_poll_cpu for io_uring workloads 00:01:59.753 719d03c6a sock/uring: only register net impl if supported 00:01:59.753 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:59.753 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:59.753 6c7c1f57e accel: add sequence outstanding stat 00:01:59.770 [Pipeline] } 00:01:59.790 [Pipeline] // stage 00:01:59.803 [Pipeline] stage 00:01:59.805 [Pipeline] { (Prepare) 00:01:59.829 [Pipeline] writeFile 00:01:59.848 [Pipeline] sh 00:02:00.131 + logger -p user.info -t JENKINS-CI 00:02:00.143 [Pipeline] sh 00:02:00.422 + logger -p user.info -t JENKINS-CI 00:02:00.437 [Pipeline] sh 00:02:00.725 + cat autorun-spdk.conf 00:02:00.725 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:00.725 SPDK_TEST_NVMF=1 00:02:00.725 SPDK_TEST_NVME_CLI=1 00:02:00.725 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:00.725 SPDK_TEST_NVMF_NICS=e810 00:02:00.725 SPDK_TEST_VFIOUSER=1 00:02:00.725 SPDK_RUN_UBSAN=1 00:02:00.725 NET_TYPE=phy 00:02:00.733 RUN_NIGHTLY=0 00:02:00.738 [Pipeline] readFile 00:02:00.769 [Pipeline] withEnv 00:02:00.772 [Pipeline] { 00:02:00.787 [Pipeline] sh 00:02:01.071 + set -ex 00:02:01.071 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:01.071 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:01.071 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:01.071 ++ SPDK_TEST_NVMF=1 00:02:01.071 ++ SPDK_TEST_NVME_CLI=1 00:02:01.071 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:01.071 ++ SPDK_TEST_NVMF_NICS=e810 00:02:01.071 ++ SPDK_TEST_VFIOUSER=1 00:02:01.071 ++ SPDK_RUN_UBSAN=1 00:02:01.071 ++ NET_TYPE=phy 00:02:01.071 ++ RUN_NIGHTLY=0 00:02:01.071 + case $SPDK_TEST_NVMF_NICS in 00:02:01.071 + DRIVERS=ice 00:02:01.071 + [[ 
tcp == \r\d\m\a ]] 00:02:01.071 + [[ -n ice ]] 00:02:01.071 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:01.071 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:01.071 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:01.071 rmmod: ERROR: Module irdma is not currently loaded 00:02:01.071 rmmod: ERROR: Module i40iw is not currently loaded 00:02:01.071 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:01.071 + true 00:02:01.071 + for D in $DRIVERS 00:02:01.071 + sudo modprobe ice 00:02:01.071 + exit 0 00:02:01.081 [Pipeline] } 00:02:01.101 [Pipeline] // withEnv 00:02:01.107 [Pipeline] } 00:02:01.127 [Pipeline] // stage 00:02:01.141 [Pipeline] catchError 00:02:01.144 [Pipeline] { 00:02:01.165 [Pipeline] timeout 00:02:01.165 Timeout set to expire in 50 min 00:02:01.167 [Pipeline] { 00:02:01.185 [Pipeline] stage 00:02:01.187 [Pipeline] { (Tests) 00:02:01.205 [Pipeline] sh 00:02:01.488 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:01.488 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:01.488 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:01.488 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:01.488 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:01.488 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:01.488 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:01.488 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:01.488 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:01.488 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:01.488 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:01.488 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:01.488 + source /etc/os-release 00:02:01.488 ++ NAME='Fedora Linux' 00:02:01.488 ++ VERSION='38 (Cloud Edition)' 00:02:01.488 ++ ID=fedora 00:02:01.488 ++ VERSION_ID=38 00:02:01.488 ++ VERSION_CODENAME= 00:02:01.488 ++ PLATFORM_ID=platform:f38 00:02:01.488 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:01.488 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:01.488 ++ LOGO=fedora-logo-icon 00:02:01.488 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:01.488 ++ HOME_URL=https://fedoraproject.org/ 00:02:01.488 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:01.488 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:01.488 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:01.488 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:01.488 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:01.488 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:01.488 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:01.488 ++ SUPPORT_END=2024-05-14 00:02:01.488 ++ VARIANT='Cloud Edition' 00:02:01.488 ++ VARIANT_ID=cloud 00:02:01.488 + uname -a 00:02:01.488 Linux spdk-cyp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:01.488 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:04.781 Hugepages 00:02:04.781 node hugesize free / total 00:02:04.781 node0 1048576kB 0 / 0 00:02:04.781 node0 2048kB 0 / 0 00:02:04.781 node1 1048576kB 0 / 0 00:02:04.781 node1 2048kB 0 / 0 00:02:04.781 00:02:04.781 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:04.781 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:02:04.781 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:02:04.781 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:02:04.781 
I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:02:04.781 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:02:04.781 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:02:04.781 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:02:04.781 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:02:05.042 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:02:05.042 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:02:05.042 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:02:05.042 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:02:05.042 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:02:05.042 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:02:05.042 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:02:05.042 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:02:05.042 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:02:05.042 + rm -f /tmp/spdk-ld-path 00:02:05.042 + source autorun-spdk.conf 00:02:05.042 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.042 ++ SPDK_TEST_NVMF=1 00:02:05.042 ++ SPDK_TEST_NVME_CLI=1 00:02:05.042 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:05.042 ++ SPDK_TEST_NVMF_NICS=e810 00:02:05.042 ++ SPDK_TEST_VFIOUSER=1 00:02:05.042 ++ SPDK_RUN_UBSAN=1 00:02:05.042 ++ NET_TYPE=phy 00:02:05.042 ++ RUN_NIGHTLY=0 00:02:05.042 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:05.042 + [[ -n '' ]] 00:02:05.042 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:05.042 + for M in /var/spdk/build-*-manifest.txt 00:02:05.042 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:05.042 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:05.042 + for M in /var/spdk/build-*-manifest.txt 00:02:05.042 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:05.042 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:05.042 ++ uname 00:02:05.042 + [[ Linux == \L\i\n\u\x ]] 00:02:05.042 + sudo dmesg -T 00:02:05.042 + sudo dmesg --clear 00:02:05.042 + dmesg_pid=342652 00:02:05.042 + [[ Fedora Linux == FreeBSD ]] 00:02:05.042 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:05.042 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:05.042 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:05.042 + [[ -x /usr/src/fio-static/fio ]] 00:02:05.042 + export FIO_BIN=/usr/src/fio-static/fio 00:02:05.042 + FIO_BIN=/usr/src/fio-static/fio 00:02:05.042 + sudo dmesg -Tw 00:02:05.042 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:05.042 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:05.042 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:05.042 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:05.042 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:05.042 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:05.042 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:05.042 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:05.042 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:05.042 Test configuration: 00:02:05.042 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.042 SPDK_TEST_NVMF=1 00:02:05.042 SPDK_TEST_NVME_CLI=1 00:02:05.042 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:05.042 SPDK_TEST_NVMF_NICS=e810 00:02:05.042 SPDK_TEST_VFIOUSER=1 00:02:05.042 SPDK_RUN_UBSAN=1 00:02:05.042 NET_TYPE=phy 00:02:05.042 RUN_NIGHTLY=0 09:10:52 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:05.042 09:10:52 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:05.042 09:10:52 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:05.042 09:10:52 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:05.042 09:10:52 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.042 09:10:52 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.042 09:10:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.042 09:10:52 -- paths/export.sh@5 -- $ export PATH 00:02:05.042 09:10:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.302 09:10:52 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:05.302 09:10:52 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:05.302 09:10:52 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721027452.XXXXXX 00:02:05.302 09:10:52 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721027452.cNpMxT 00:02:05.302 09:10:52 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:05.302 09:10:52 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:05.302 09:10:52 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:05.302 09:10:52 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:05.302 09:10:52 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:05.302 09:10:52 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:05.302 09:10:52 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:05.302 09:10:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.302 09:10:52 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:05.302 09:10:52 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:05.302 09:10:52 -- pm/common@17 -- $ local monitor 00:02:05.302 09:10:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.302 09:10:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.302 09:10:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.302 09:10:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.302 09:10:52 -- pm/common@21 -- $ date +%s 00:02:05.302 09:10:52 -- pm/common@25 -- $ sleep 1 00:02:05.303 09:10:52 -- pm/common@21 -- $ date +%s 00:02:05.303 09:10:52 -- pm/common@21 -- $ date +%s 00:02:05.303 09:10:52 -- pm/common@21 -- $ date +%s 00:02:05.303 09:10:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721027452 00:02:05.303 09:10:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721027452 00:02:05.303 09:10:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721027452 00:02:05.303 09:10:52 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721027452 00:02:05.303 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721027452_collect-vmstat.pm.log 00:02:05.303 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721027452_collect-cpu-load.pm.log 00:02:05.303 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721027452_collect-cpu-temp.pm.log 00:02:05.303 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721027452_collect-bmc-pm.bmc.pm.log 00:02:06.240 09:10:53 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:06.240 09:10:53 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:06.240 09:10:53 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:06.240 09:10:53 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:06.240 09:10:53 -- spdk/autobuild.sh@16 -- $ date -u 00:02:06.240 Mon Jul 15 07:10:53 AM UTC 2024 00:02:06.240 09:10:53 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:06.240 v24.09-pre-203-ga22f117fe 00:02:06.240 09:10:53 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:06.240 09:10:53 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:06.240 09:10:53 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:06.241 09:10:53 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:06.241 09:10:53 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:06.241 09:10:53 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.241 ************************************ 00:02:06.241 START TEST ubsan 00:02:06.241 ************************************ 00:02:06.241 09:10:53 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:06.241 using ubsan 00:02:06.241 00:02:06.241 real 0m0.000s 00:02:06.241 user 0m0.000s 00:02:06.241 sys 0m0.000s 00:02:06.241 09:10:53 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:06.241 09:10:53 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:06.241 ************************************ 00:02:06.241 END TEST ubsan 00:02:06.241 ************************************ 00:02:06.241 09:10:53 -- common/autotest_common.sh@1142 -- $ return 0 00:02:06.241 09:10:53 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:06.241 09:10:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:06.241 09:10:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:06.241 09:10:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:06.241 09:10:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:06.241 09:10:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:06.241 09:10:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:06.241 09:10:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:06.241 09:10:53 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:06.499 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:06.500 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:06.758 Using 'verbs' RDMA provider 00:02:22.622 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:34.847 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:34.847 Creating mk/config.mk...done. 00:02:34.847 Creating mk/cc.flags.mk...done. 00:02:34.847 Type 'make' to build. 
00:02:34.847 09:11:21 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:02:34.847 09:11:21 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:34.847 09:11:21 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:34.847 09:11:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:34.847 ************************************ 00:02:34.847 START TEST make 00:02:34.847 ************************************ 00:02:34.847 09:11:21 make -- common/autotest_common.sh@1123 -- $ make -j144 00:02:34.847 make[1]: Nothing to be done for 'all'. 00:02:35.786 The Meson build system 00:02:35.786 Version: 1.3.1 00:02:35.786 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:35.786 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:35.786 Build type: native build 00:02:35.786 Project name: libvfio-user 00:02:35.786 Project version: 0.0.1 00:02:35.786 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:35.786 C linker for the host machine: cc ld.bfd 2.39-16 00:02:35.786 Host machine cpu family: x86_64 00:02:35.786 Host machine cpu: x86_64 00:02:35.786 Run-time dependency threads found: YES 00:02:35.786 Library dl found: YES 00:02:35.786 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:35.786 Run-time dependency json-c found: YES 0.17 00:02:35.786 Run-time dependency cmocka found: YES 1.1.7 00:02:35.786 Program pytest-3 found: NO 00:02:35.786 Program flake8 found: NO 00:02:35.786 Program misspell-fixer found: NO 00:02:35.786 Program restructuredtext-lint found: NO 00:02:35.786 Program valgrind found: YES (/usr/bin/valgrind) 00:02:35.786 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:35.786 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:35.786 Compiler for C supports arguments -Wwrite-strings: YES 00:02:35.786 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:35.786 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:35.786 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:35.786 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:35.786 Build targets in project: 8 00:02:35.786 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:35.787 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:35.787 00:02:35.787 libvfio-user 0.0.1 00:02:35.787 00:02:35.787 User defined options 00:02:35.787 buildtype : debug 00:02:35.787 default_library: shared 00:02:35.787 libdir : /usr/local/lib 00:02:35.787 00:02:35.787 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:36.044 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:36.044 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:36.044 [2/37] Compiling C object samples/null.p/null.c.o 00:02:36.044 [3/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:36.301 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:36.301 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:36.301 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:36.301 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:36.301 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:36.301 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:36.301 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:36.301 [11/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:36.301 [12/37] Compiling C object samples/server.p/server.c.o 00:02:36.301 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:36.301 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:36.301 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:36.301 [16/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:36.301 [17/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:36.301 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:36.301 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:36.301 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:36.301 [21/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:36.301 [22/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:36.301 [23/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:36.301 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:36.301 [25/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:36.301 [26/37] Compiling C object samples/client.p/client.c.o 00:02:36.301 [27/37] Linking target samples/client 00:02:36.301 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:36.301 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:36.301 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:02:36.301 [31/37] Linking target test/unit_tests 00:02:36.560 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:36.560 [33/37] Linking target samples/shadow_ioeventfd_server 00:02:36.560 [34/37] Linking target samples/server 00:02:36.560 [35/37] Linking target samples/lspci 00:02:36.560 [36/37] Linking target samples/null 00:02:36.560 [37/37] Linking target samples/gpio-pci-idio-16 00:02:36.560 INFO: autodetecting backend as ninja 00:02:36.560 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:02:36.560 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:36.820 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:36.820 ninja: no work to do. 00:02:43.413 The Meson build system 00:02:43.413 Version: 1.3.1 00:02:43.413 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:43.413 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:43.413 Build type: native build 00:02:43.413 Program cat found: YES (/usr/bin/cat) 00:02:43.413 Project name: DPDK 00:02:43.413 Project version: 24.03.0 00:02:43.413 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:43.413 C linker for the host machine: cc ld.bfd 2.39-16 00:02:43.413 Host machine cpu family: x86_64 00:02:43.413 Host machine cpu: x86_64 00:02:43.413 Message: ## Building in Developer Mode ## 00:02:43.413 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:43.413 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:43.413 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:43.413 Program python3 found: YES (/usr/bin/python3) 00:02:43.413 Program cat found: YES (/usr/bin/cat) 00:02:43.413 Compiler for C supports arguments -march=native: YES 00:02:43.413 Checking for size of "void *" : 8 00:02:43.413 Checking for size of "void *" : 8 (cached) 00:02:43.413 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:43.413 Library m found: YES 00:02:43.413 Library numa found: YES 00:02:43.413 Has header "numaif.h" : YES 00:02:43.413 Library fdt found: NO 00:02:43.413 Library execinfo found: NO 00:02:43.413 Has header "execinfo.h" : YES 00:02:43.413 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:43.413 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:43.413 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:43.413 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:43.413 Run-time dependency openssl found: YES 3.0.9 00:02:43.413 Run-time dependency libpcap found: YES 1.10.4 00:02:43.413 Has header "pcap.h" with dependency libpcap: YES 00:02:43.413 Compiler for C supports arguments -Wcast-qual: YES 00:02:43.413 Compiler for C supports arguments -Wdeprecated: YES 00:02:43.413 Compiler for C supports arguments -Wformat: YES 00:02:43.413 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:43.413 Compiler for C supports arguments -Wformat-security: NO 00:02:43.413 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:43.413 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:43.413 Compiler for C supports arguments -Wnested-externs: YES 00:02:43.413 Compiler for C supports arguments -Wold-style-definition: YES 00:02:43.413 Compiler for C supports arguments -Wpointer-arith: YES 00:02:43.413 Compiler for C supports arguments -Wsign-compare: YES 00:02:43.413 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:43.413 Compiler for C supports arguments -Wundef: YES 00:02:43.413 Compiler for C supports arguments -Wwrite-strings: YES 00:02:43.413 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:43.413 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:02:43.413 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:43.413 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:43.413 Program objdump found: YES (/usr/bin/objdump) 00:02:43.413 Compiler for C supports arguments -mavx512f: YES 00:02:43.413 Checking if "AVX512 checking" compiles: YES 00:02:43.413 Fetching value of define "__SSE4_2__" : 1 00:02:43.413 Fetching value of define "__AES__" : 1 00:02:43.413 Fetching value of define "__AVX__" : 1 00:02:43.413 Fetching value of define "__AVX2__" : 1 00:02:43.413 Fetching value of define "__AVX512BW__" : 1 00:02:43.413 Fetching value of define "__AVX512CD__" : 1 00:02:43.413 Fetching value of define "__AVX512DQ__" : 1 00:02:43.413 Fetching value of define "__AVX512F__" : 1 00:02:43.413 Fetching value of define "__AVX512VL__" : 1 00:02:43.413 Fetching value of define "__PCLMUL__" : 1 00:02:43.413 Fetching value of define "__RDRND__" : 1 00:02:43.413 Fetching value of define "__RDSEED__" : 1 00:02:43.413 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:43.413 Fetching value of define "__znver1__" : (undefined) 00:02:43.413 Fetching value of define "__znver2__" : (undefined) 00:02:43.413 Fetching value of define "__znver3__" : (undefined) 00:02:43.413 Fetching value of define "__znver4__" : (undefined) 00:02:43.413 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:43.413 Message: lib/log: Defining dependency "log" 00:02:43.413 Message: lib/kvargs: Defining dependency "kvargs" 00:02:43.413 Message: lib/telemetry: Defining dependency "telemetry" 00:02:43.413 Checking for function "getentropy" : NO 00:02:43.413 Message: lib/eal: Defining dependency "eal" 00:02:43.413 Message: lib/ring: Defining dependency "ring" 00:02:43.413 Message: lib/rcu: Defining dependency "rcu" 00:02:43.413 Message: lib/mempool: Defining dependency "mempool" 00:02:43.413 Message: lib/mbuf: Defining dependency "mbuf" 00:02:43.413 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:43.413 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:43.413 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:43.413 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:43.413 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:43.413 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:43.413 Compiler for C supports arguments -mpclmul: YES 00:02:43.413 Compiler for C supports arguments -maes: YES 00:02:43.413 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:43.413 Compiler for C supports arguments -mavx512bw: YES 00:02:43.413 Compiler for C supports arguments -mavx512dq: YES 00:02:43.413 Compiler for C supports arguments -mavx512vl: YES 00:02:43.413 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:43.413 Compiler for C supports arguments -mavx2: YES 00:02:43.413 Compiler for C supports arguments -mavx: YES 00:02:43.413 Message: lib/net: Defining dependency "net" 00:02:43.413 Message: lib/meter: Defining dependency "meter" 00:02:43.413 Message: lib/ethdev: Defining dependency "ethdev" 00:02:43.413 Message: lib/pci: Defining dependency "pci" 00:02:43.413 Message: lib/cmdline: Defining dependency "cmdline" 00:02:43.413 Message: lib/hash: Defining dependency "hash" 00:02:43.413 Message: lib/timer: Defining dependency "timer" 00:02:43.413 Message: lib/compressdev: Defining dependency "compressdev" 00:02:43.413 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:43.413 Message: lib/dmadev: Defining dependency "dmadev" 00:02:43.413 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:02:43.413 Message: lib/power: Defining dependency "power" 00:02:43.413 Message: lib/reorder: Defining dependency "reorder" 00:02:43.414 Message: lib/security: Defining dependency "security" 00:02:43.414 Has header "linux/userfaultfd.h" : YES 00:02:43.414 Has header "linux/vduse.h" : YES 00:02:43.414 Message: lib/vhost: Defining dependency "vhost" 00:02:43.414 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:43.414 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:43.414 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:43.414 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:43.414 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:43.414 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:43.414 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:43.414 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:43.414 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:43.414 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:43.414 Program doxygen found: YES (/usr/bin/doxygen) 00:02:43.414 Configuring doxy-api-html.conf using configuration 00:02:43.414 Configuring doxy-api-man.conf using configuration 00:02:43.414 Program mandb found: YES (/usr/bin/mandb) 00:02:43.414 Program sphinx-build found: NO 00:02:43.414 Configuring rte_build_config.h using configuration 00:02:43.414 Message: 00:02:43.414 ================= 00:02:43.414 Applications Enabled 00:02:43.414 ================= 00:02:43.414 00:02:43.414 apps: 00:02:43.414 00:02:43.414 00:02:43.414 Message: 00:02:43.414 ================= 00:02:43.414 Libraries Enabled 00:02:43.414 ================= 00:02:43.414 00:02:43.414 libs: 00:02:43.414 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:43.414 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:43.414 cryptodev, dmadev, power, reorder, security, vhost, 00:02:43.414 00:02:43.414 Message: 00:02:43.414 =============== 00:02:43.414 Drivers Enabled 00:02:43.414 =============== 00:02:43.414 00:02:43.414 common: 00:02:43.414 00:02:43.414 bus: 00:02:43.414 pci, vdev, 00:02:43.414 mempool: 00:02:43.414 ring, 00:02:43.414 dma: 00:02:43.414 00:02:43.414 net: 00:02:43.414 00:02:43.414 crypto: 00:02:43.414 00:02:43.414 compress: 00:02:43.414 00:02:43.414 vdpa: 00:02:43.414 00:02:43.414 00:02:43.414 Message: 00:02:43.414 ================= 00:02:43.414 Content Skipped 00:02:43.414 ================= 00:02:43.414 00:02:43.414 apps: 00:02:43.414 dumpcap: explicitly disabled via build config 00:02:43.414 graph: explicitly disabled via build config 00:02:43.414 pdump: explicitly disabled via build config 00:02:43.414 proc-info: explicitly disabled via build config 00:02:43.414 test-acl: explicitly disabled via build config 00:02:43.414 test-bbdev: explicitly disabled via build config 00:02:43.414 test-cmdline: explicitly disabled via build config 00:02:43.414 test-compress-perf: explicitly disabled via build config 00:02:43.414 test-crypto-perf: explicitly disabled via build config 00:02:43.414 test-dma-perf: explicitly disabled via build config 00:02:43.414 test-eventdev: explicitly disabled via build config 00:02:43.414 test-fib: explicitly disabled via build config 00:02:43.414 test-flow-perf: explicitly disabled via build config 00:02:43.414 test-gpudev: explicitly disabled via build config 00:02:43.414 
test-mldev: explicitly disabled via build config 00:02:43.414 test-pipeline: explicitly disabled via build config 00:02:43.414 test-pmd: explicitly disabled via build config 00:02:43.414 test-regex: explicitly disabled via build config 00:02:43.414 test-sad: explicitly disabled via build config 00:02:43.414 test-security-perf: explicitly disabled via build config 00:02:43.414 00:02:43.414 libs: 00:02:43.414 argparse: explicitly disabled via build config 00:02:43.414 metrics: explicitly disabled via build config 00:02:43.414 acl: explicitly disabled via build config 00:02:43.414 bbdev: explicitly disabled via build config 00:02:43.414 bitratestats: explicitly disabled via build config 00:02:43.414 bpf: explicitly disabled via build config 00:02:43.414 cfgfile: explicitly disabled via build config 00:02:43.414 distributor: explicitly disabled via build config 00:02:43.414 efd: explicitly disabled via build config 00:02:43.414 eventdev: explicitly disabled via build config 00:02:43.414 dispatcher: explicitly disabled via build config 00:02:43.414 gpudev: explicitly disabled via build config 00:02:43.414 gro: explicitly disabled via build config 00:02:43.414 gso: explicitly disabled via build config 00:02:43.414 ip_frag: explicitly disabled via build config 00:02:43.414 jobstats: explicitly disabled via build config 00:02:43.414 latencystats: explicitly disabled via build config 00:02:43.414 lpm: explicitly disabled via build config 00:02:43.414 member: explicitly disabled via build config 00:02:43.414 pcapng: explicitly disabled via build config 00:02:43.414 rawdev: explicitly disabled via build config 00:02:43.414 regexdev: explicitly disabled via build config 00:02:43.414 mldev: explicitly disabled via build config 00:02:43.414 rib: explicitly disabled via build config 00:02:43.414 sched: explicitly disabled via build config 00:02:43.414 stack: explicitly disabled via build config 00:02:43.414 ipsec: explicitly disabled via build config 00:02:43.414 pdcp: explicitly disabled via build config 00:02:43.414 fib: explicitly disabled via build config 00:02:43.414 port: explicitly disabled via build config 00:02:43.414 pdump: explicitly disabled via build config 00:02:43.414 table: explicitly disabled via build config 00:02:43.414 pipeline: explicitly disabled via build config 00:02:43.414 graph: explicitly disabled via build config 00:02:43.414 node: explicitly disabled via build config 00:02:43.414 00:02:43.414 drivers: 00:02:43.414 common/cpt: not in enabled drivers build config 00:02:43.414 common/dpaax: not in enabled drivers build config 00:02:43.414 common/iavf: not in enabled drivers build config 00:02:43.414 common/idpf: not in enabled drivers build config 00:02:43.414 common/ionic: not in enabled drivers build config 00:02:43.414 common/mvep: not in enabled drivers build config 00:02:43.414 common/octeontx: not in enabled drivers build config 00:02:43.414 bus/auxiliary: not in enabled drivers build config 00:02:43.414 bus/cdx: not in enabled drivers build config 00:02:43.414 bus/dpaa: not in enabled drivers build config 00:02:43.414 bus/fslmc: not in enabled drivers build config 00:02:43.414 bus/ifpga: not in enabled drivers build config 00:02:43.414 bus/platform: not in enabled drivers build config 00:02:43.414 bus/uacce: not in enabled drivers build config 00:02:43.414 bus/vmbus: not in enabled drivers build config 00:02:43.414 common/cnxk: not in enabled drivers build config 00:02:43.414 common/mlx5: not in enabled drivers build config 00:02:43.414 common/nfp: not in enabled drivers 
build config 00:02:43.414 common/nitrox: not in enabled drivers build config 00:02:43.414 common/qat: not in enabled drivers build config 00:02:43.414 common/sfc_efx: not in enabled drivers build config 00:02:43.414 mempool/bucket: not in enabled drivers build config 00:02:43.414 mempool/cnxk: not in enabled drivers build config 00:02:43.414 mempool/dpaa: not in enabled drivers build config 00:02:43.414 mempool/dpaa2: not in enabled drivers build config 00:02:43.414 mempool/octeontx: not in enabled drivers build config 00:02:43.414 mempool/stack: not in enabled drivers build config 00:02:43.414 dma/cnxk: not in enabled drivers build config 00:02:43.414 dma/dpaa: not in enabled drivers build config 00:02:43.414 dma/dpaa2: not in enabled drivers build config 00:02:43.414 dma/hisilicon: not in enabled drivers build config 00:02:43.414 dma/idxd: not in enabled drivers build config 00:02:43.414 dma/ioat: not in enabled drivers build config 00:02:43.414 dma/skeleton: not in enabled drivers build config 00:02:43.414 net/af_packet: not in enabled drivers build config 00:02:43.414 net/af_xdp: not in enabled drivers build config 00:02:43.414 net/ark: not in enabled drivers build config 00:02:43.414 net/atlantic: not in enabled drivers build config 00:02:43.414 net/avp: not in enabled drivers build config 00:02:43.414 net/axgbe: not in enabled drivers build config 00:02:43.414 net/bnx2x: not in enabled drivers build config 00:02:43.414 net/bnxt: not in enabled drivers build config 00:02:43.414 net/bonding: not in enabled drivers build config 00:02:43.414 net/cnxk: not in enabled drivers build config 00:02:43.414 net/cpfl: not in enabled drivers build config 00:02:43.414 net/cxgbe: not in enabled drivers build config 00:02:43.414 net/dpaa: not in enabled drivers build config 00:02:43.414 net/dpaa2: not in enabled drivers build config 00:02:43.414 net/e1000: not in enabled drivers build config 00:02:43.414 net/ena: not in enabled drivers build config 00:02:43.414 net/enetc: not in enabled drivers build config 00:02:43.414 net/enetfec: not in enabled drivers build config 00:02:43.414 net/enic: not in enabled drivers build config 00:02:43.414 net/failsafe: not in enabled drivers build config 00:02:43.414 net/fm10k: not in enabled drivers build config 00:02:43.414 net/gve: not in enabled drivers build config 00:02:43.414 net/hinic: not in enabled drivers build config 00:02:43.414 net/hns3: not in enabled drivers build config 00:02:43.414 net/i40e: not in enabled drivers build config 00:02:43.414 net/iavf: not in enabled drivers build config 00:02:43.414 net/ice: not in enabled drivers build config 00:02:43.414 net/idpf: not in enabled drivers build config 00:02:43.414 net/igc: not in enabled drivers build config 00:02:43.414 net/ionic: not in enabled drivers build config 00:02:43.414 net/ipn3ke: not in enabled drivers build config 00:02:43.414 net/ixgbe: not in enabled drivers build config 00:02:43.414 net/mana: not in enabled drivers build config 00:02:43.414 net/memif: not in enabled drivers build config 00:02:43.414 net/mlx4: not in enabled drivers build config 00:02:43.414 net/mlx5: not in enabled drivers build config 00:02:43.414 net/mvneta: not in enabled drivers build config 00:02:43.414 net/mvpp2: not in enabled drivers build config 00:02:43.414 net/netvsc: not in enabled drivers build config 00:02:43.414 net/nfb: not in enabled drivers build config 00:02:43.414 net/nfp: not in enabled drivers build config 00:02:43.414 net/ngbe: not in enabled drivers build config 00:02:43.414 net/null: not in 
enabled drivers build config 00:02:43.414 net/octeontx: not in enabled drivers build config 00:02:43.414 net/octeon_ep: not in enabled drivers build config 00:02:43.414 net/pcap: not in enabled drivers build config 00:02:43.414 net/pfe: not in enabled drivers build config 00:02:43.414 net/qede: not in enabled drivers build config 00:02:43.414 net/ring: not in enabled drivers build config 00:02:43.414 net/sfc: not in enabled drivers build config 00:02:43.414 net/softnic: not in enabled drivers build config 00:02:43.414 net/tap: not in enabled drivers build config 00:02:43.414 net/thunderx: not in enabled drivers build config 00:02:43.414 net/txgbe: not in enabled drivers build config 00:02:43.414 net/vdev_netvsc: not in enabled drivers build config 00:02:43.415 net/vhost: not in enabled drivers build config 00:02:43.415 net/virtio: not in enabled drivers build config 00:02:43.415 net/vmxnet3: not in enabled drivers build config 00:02:43.415 raw/*: missing internal dependency, "rawdev" 00:02:43.415 crypto/armv8: not in enabled drivers build config 00:02:43.415 crypto/bcmfs: not in enabled drivers build config 00:02:43.415 crypto/caam_jr: not in enabled drivers build config 00:02:43.415 crypto/ccp: not in enabled drivers build config 00:02:43.415 crypto/cnxk: not in enabled drivers build config 00:02:43.415 crypto/dpaa_sec: not in enabled drivers build config 00:02:43.415 crypto/dpaa2_sec: not in enabled drivers build config 00:02:43.415 crypto/ipsec_mb: not in enabled drivers build config 00:02:43.415 crypto/mlx5: not in enabled drivers build config 00:02:43.415 crypto/mvsam: not in enabled drivers build config 00:02:43.415 crypto/nitrox: not in enabled drivers build config 00:02:43.415 crypto/null: not in enabled drivers build config 00:02:43.415 crypto/octeontx: not in enabled drivers build config 00:02:43.415 crypto/openssl: not in enabled drivers build config 00:02:43.415 crypto/scheduler: not in enabled drivers build config 00:02:43.415 crypto/uadk: not in enabled drivers build config 00:02:43.415 crypto/virtio: not in enabled drivers build config 00:02:43.415 compress/isal: not in enabled drivers build config 00:02:43.415 compress/mlx5: not in enabled drivers build config 00:02:43.415 compress/nitrox: not in enabled drivers build config 00:02:43.415 compress/octeontx: not in enabled drivers build config 00:02:43.415 compress/zlib: not in enabled drivers build config 00:02:43.415 regex/*: missing internal dependency, "regexdev" 00:02:43.415 ml/*: missing internal dependency, "mldev" 00:02:43.415 vdpa/ifc: not in enabled drivers build config 00:02:43.415 vdpa/mlx5: not in enabled drivers build config 00:02:43.415 vdpa/nfp: not in enabled drivers build config 00:02:43.415 vdpa/sfc: not in enabled drivers build config 00:02:43.415 event/*: missing internal dependency, "eventdev" 00:02:43.415 baseband/*: missing internal dependency, "bbdev" 00:02:43.415 gpu/*: missing internal dependency, "gpudev" 00:02:43.415 00:02:43.415 00:02:43.415 Build targets in project: 84 00:02:43.415 00:02:43.415 DPDK 24.03.0 00:02:43.415 00:02:43.415 User defined options 00:02:43.415 buildtype : debug 00:02:43.415 default_library : shared 00:02:43.415 libdir : lib 00:02:43.415 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:43.415 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:43.415 c_link_args : 00:02:43.415 cpu_instruction_set: native 00:02:43.415 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:02:43.415 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:02:43.415 enable_docs : false 00:02:43.415 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:43.415 enable_kmods : false 00:02:43.415 max_lcores : 128 00:02:43.415 tests : false 00:02:43.415 00:02:43.415 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:43.415 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:43.415 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:43.415 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:43.415 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:43.415 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:43.415 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:43.415 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:43.675 [7/267] Linking static target lib/librte_kvargs.a 00:02:43.675 [8/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:43.675 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:43.675 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:43.675 [11/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:43.675 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:43.675 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:43.675 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:43.675 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:43.675 [16/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:43.675 [17/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:43.675 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:43.675 [19/267] Linking static target lib/librte_log.a 00:02:43.675 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:43.675 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:43.675 [22/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:43.675 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:43.675 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:43.675 [25/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:43.675 [26/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:43.675 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:43.675 [28/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:43.675 [29/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:43.675 [30/267] Linking static target lib/librte_pci.a 00:02:43.675 [31/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:43.675 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:43.933 [33/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:43.933 [34/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:43.933 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:43.933 [36/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:43.933 [37/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:43.933 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:43.933 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:43.933 [40/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.933 [41/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:43.933 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:43.933 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:43.933 [44/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:43.933 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:43.933 [46/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.933 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:43.933 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:43.933 [49/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:43.933 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:43.933 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:43.933 [52/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:43.933 [53/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:43.933 [54/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:43.933 [55/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:43.933 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:43.933 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:43.933 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:44.192 [59/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:44.192 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:44.192 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:44.192 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:44.192 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:44.192 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:44.192 [65/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:44.192 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:44.192 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:44.192 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:44.192 [69/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:44.192 [70/267] Compiling C object 
lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:44.192 [71/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:44.192 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:44.192 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:44.192 [74/267] Linking static target lib/librte_meter.a 00:02:44.192 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:44.192 [76/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:44.192 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:44.192 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:44.192 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:44.192 [80/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:44.192 [81/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:44.192 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:44.192 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:44.192 [84/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:44.192 [85/267] Linking static target lib/librte_timer.a 00:02:44.192 [86/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:44.192 [87/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:44.192 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:44.192 [89/267] Linking static target lib/librte_telemetry.a 00:02:44.192 [90/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:44.192 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:44.192 [92/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:44.192 [93/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:44.192 [94/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:44.192 [95/267] Linking static target lib/librte_ring.a 00:02:44.192 [96/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:44.192 [97/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:44.192 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:44.192 [99/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:44.193 [100/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:44.193 [101/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:44.193 [102/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:44.193 [103/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:44.193 [104/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:44.193 [105/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:44.193 [106/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:44.193 [107/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:44.193 [108/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:44.193 [109/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:44.193 [110/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:44.193 [111/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:44.193 [112/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:44.193 [113/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:44.193 [114/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:44.193 [115/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:44.193 [116/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:44.193 [117/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:44.193 [118/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:44.193 [119/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:44.193 [120/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:44.193 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:44.193 [122/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:44.193 [123/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.193 [124/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:44.193 [125/267] Linking static target lib/librte_cmdline.a 00:02:44.193 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:44.193 [127/267] Linking static target lib/librte_dmadev.a 00:02:44.193 [128/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:44.193 [129/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:44.193 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:44.193 [131/267] Linking static target lib/librte_mempool.a 00:02:44.193 [132/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:44.193 [133/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:44.193 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:44.193 [135/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:44.193 [136/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:44.193 [137/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:44.193 [138/267] Linking static target lib/librte_security.a 00:02:44.193 [139/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:44.193 [140/267] Linking static target lib/librte_rcu.a 00:02:44.193 [141/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:44.193 [142/267] Linking target lib/librte_log.so.24.1 00:02:44.193 [143/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:44.193 [144/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:44.193 [145/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:44.193 [146/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:44.193 [147/267] Linking static target lib/librte_compressdev.a 00:02:44.193 [148/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:44.193 [149/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:44.193 [150/267] Linking static target lib/librte_reorder.a 00:02:44.193 [151/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:44.193 [152/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:44.193 [153/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:44.193 [154/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:44.193 [155/267] Linking static target lib/librte_net.a 00:02:44.193 [156/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:44.193 [157/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:44.193 [158/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:44.193 [159/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:44.193 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:44.193 [161/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:44.193 [162/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:44.193 [163/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:44.193 [164/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:44.453 [165/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:44.453 [166/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:44.453 [167/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:44.453 [168/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:44.453 [169/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:44.453 [170/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:44.453 [171/267] Linking static target lib/librte_mbuf.a 00:02:44.453 [172/267] Linking static target lib/librte_power.a 00:02:44.453 [173/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:44.453 [174/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.453 [175/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:44.453 [176/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:44.453 [177/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:44.453 [178/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:44.453 [179/267] Linking target lib/librte_kvargs.so.24.1 00:02:44.453 [180/267] Linking static target drivers/librte_bus_vdev.a 00:02:44.453 [181/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:44.453 [182/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:44.453 [183/267] Linking static target lib/librte_eal.a 00:02:44.453 [184/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:44.453 [185/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:44.453 [186/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:44.453 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:44.453 [188/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:44.453 [189/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:44.453 [190/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:44.453 [191/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.453 [192/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:44.453 
[193/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:44.453 [194/267] Linking static target lib/librte_hash.a 00:02:44.453 [195/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.714 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:44.714 [197/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:44.714 [198/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:44.714 [199/267] Linking static target lib/librte_cryptodev.a 00:02:44.714 [200/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:44.714 [201/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.714 [202/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.714 [203/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:44.714 [204/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:44.714 [205/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:44.714 [206/267] Linking static target drivers/librte_bus_pci.a 00:02:44.714 [207/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:44.714 [208/267] Linking static target drivers/librte_mempool_ring.a 00:02:44.714 [209/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.714 [210/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.714 [211/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.714 [212/267] Linking target lib/librte_telemetry.so.24.1 00:02:44.714 [213/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:44.714 [214/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.975 [215/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:44.975 [216/267] Linking static target lib/librte_ethdev.a 00:02:44.975 [217/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:44.975 [218/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.975 [219/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.975 [220/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:45.235 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.235 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.235 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.495 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.495 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.495 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.755 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:45.755 [228/267] Linking static target lib/librte_vhost.a 00:02:46.697 [229/267] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.081 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.689 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.071 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.071 [233/267] Linking target lib/librte_eal.so.24.1 00:02:56.071 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:56.329 [235/267] Linking target lib/librte_ring.so.24.1 00:02:56.329 [236/267] Linking target lib/librte_pci.so.24.1 00:02:56.329 [237/267] Linking target lib/librte_meter.so.24.1 00:02:56.329 [238/267] Linking target lib/librte_timer.so.24.1 00:02:56.329 [239/267] Linking target lib/librte_dmadev.so.24.1 00:02:56.329 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:56.329 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:56.329 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:56.329 [243/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:56.329 [244/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:56.329 [245/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:56.329 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:56.329 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:56.329 [248/267] Linking target lib/librte_mempool.so.24.1 00:02:56.588 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:56.588 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:56.588 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:56.588 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:56.588 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:56.848 [254/267] Linking target lib/librte_reorder.so.24.1 00:02:56.848 [255/267] Linking target lib/librte_compressdev.so.24.1 00:02:56.848 [256/267] Linking target lib/librte_net.so.24.1 00:02:56.848 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:56.848 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:56.848 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:56.848 [260/267] Linking target lib/librte_hash.so.24.1 00:02:56.848 [261/267] Linking target lib/librte_cmdline.so.24.1 00:02:57.108 [262/267] Linking target lib/librte_ethdev.so.24.1 00:02:57.108 [263/267] Linking target lib/librte_security.so.24.1 00:02:57.108 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:57.108 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:57.108 [266/267] Linking target lib/librte_power.so.24.1 00:02:57.108 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:57.108 INFO: autodetecting backend as ninja 00:02:57.108 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:58.494 CC lib/ut_mock/mock.o 00:02:58.494 CC lib/ut/ut.o 00:02:58.494 CC lib/log/log.o 00:02:58.494 CC lib/log/log_deprecated.o 00:02:58.494 CC lib/log/log_flags.o 00:02:58.494 LIB 
libspdk_ut_mock.a 00:02:58.494 LIB libspdk_log.a 00:02:58.494 LIB libspdk_ut.a 00:02:58.494 SO libspdk_ut.so.2.0 00:02:58.494 SO libspdk_ut_mock.so.6.0 00:02:58.494 SO libspdk_log.so.7.0 00:02:58.494 SYMLINK libspdk_ut.so 00:02:58.494 SYMLINK libspdk_ut_mock.so 00:02:58.494 SYMLINK libspdk_log.so 00:02:58.756 CC lib/dma/dma.o 00:02:58.756 CC lib/util/base64.o 00:02:58.756 CC lib/ioat/ioat.o 00:02:58.756 CC lib/util/bit_array.o 00:02:58.756 CC lib/util/cpuset.o 00:02:58.756 CC lib/util/crc16.o 00:02:58.756 CC lib/util/crc32c.o 00:02:58.756 CC lib/util/crc32.o 00:02:58.756 CC lib/util/crc32_ieee.o 00:02:58.756 CC lib/util/crc64.o 00:02:58.756 CC lib/util/dif.o 00:02:58.756 CC lib/util/fd.o 00:02:58.756 CXX lib/trace_parser/trace.o 00:02:58.756 CC lib/util/file.o 00:02:58.756 CC lib/util/hexlify.o 00:02:58.756 CC lib/util/iov.o 00:02:58.756 CC lib/util/math.o 00:02:58.756 CC lib/util/pipe.o 00:02:58.756 CC lib/util/strerror_tls.o 00:02:58.756 CC lib/util/string.o 00:02:58.756 CC lib/util/uuid.o 00:02:58.756 CC lib/util/fd_group.o 00:02:59.023 CC lib/util/xor.o 00:02:59.023 CC lib/util/zipf.o 00:02:59.023 CC lib/vfio_user/host/vfio_user_pci.o 00:02:59.023 CC lib/vfio_user/host/vfio_user.o 00:02:59.023 LIB libspdk_dma.a 00:02:59.023 SO libspdk_dma.so.4.0 00:02:59.369 LIB libspdk_ioat.a 00:02:59.369 SYMLINK libspdk_dma.so 00:02:59.369 SO libspdk_ioat.so.7.0 00:02:59.369 SYMLINK libspdk_ioat.so 00:02:59.369 LIB libspdk_vfio_user.a 00:02:59.369 SO libspdk_vfio_user.so.5.0 00:02:59.369 LIB libspdk_util.a 00:02:59.369 SYMLINK libspdk_vfio_user.so 00:02:59.369 SO libspdk_util.so.9.1 00:02:59.658 SYMLINK libspdk_util.so 00:02:59.658 LIB libspdk_trace_parser.a 00:02:59.658 SO libspdk_trace_parser.so.5.0 00:02:59.919 SYMLINK libspdk_trace_parser.so 00:02:59.919 CC lib/env_dpdk/env.o 00:02:59.919 CC lib/env_dpdk/memory.o 00:02:59.919 CC lib/env_dpdk/pci.o 00:02:59.919 CC lib/env_dpdk/init.o 00:02:59.919 CC lib/env_dpdk/threads.o 00:02:59.919 CC lib/env_dpdk/pci_ioat.o 00:02:59.919 CC lib/env_dpdk/pci_virtio.o 00:02:59.919 CC lib/env_dpdk/pci_idxd.o 00:02:59.919 CC lib/env_dpdk/pci_vmd.o 00:02:59.919 CC lib/env_dpdk/pci_event.o 00:02:59.919 CC lib/env_dpdk/sigbus_handler.o 00:02:59.919 CC lib/env_dpdk/pci_dpdk.o 00:02:59.919 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:59.919 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:59.919 CC lib/rdma_provider/common.o 00:02:59.919 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:59.919 CC lib/rdma_utils/rdma_utils.o 00:02:59.919 CC lib/idxd/idxd_user.o 00:02:59.919 CC lib/idxd/idxd.o 00:02:59.919 CC lib/vmd/vmd.o 00:02:59.919 CC lib/vmd/led.o 00:02:59.919 CC lib/idxd/idxd_kernel.o 00:02:59.919 CC lib/json/json_parse.o 00:02:59.919 CC lib/json/json_util.o 00:02:59.919 CC lib/conf/conf.o 00:02:59.919 CC lib/json/json_write.o 00:03:00.180 LIB libspdk_rdma_provider.a 00:03:00.180 SO libspdk_rdma_provider.so.6.0 00:03:00.180 LIB libspdk_conf.a 00:03:00.180 LIB libspdk_rdma_utils.a 00:03:00.180 SO libspdk_conf.so.6.0 00:03:00.180 SYMLINK libspdk_rdma_provider.so 00:03:00.180 LIB libspdk_json.a 00:03:00.180 SO libspdk_rdma_utils.so.1.0 00:03:00.180 SO libspdk_json.so.6.0 00:03:00.180 SYMLINK libspdk_conf.so 00:03:00.441 SYMLINK libspdk_rdma_utils.so 00:03:00.441 SYMLINK libspdk_json.so 00:03:00.441 LIB libspdk_idxd.a 00:03:00.441 SO libspdk_idxd.so.12.0 00:03:00.441 LIB libspdk_vmd.a 00:03:00.441 SO libspdk_vmd.so.6.0 00:03:00.702 SYMLINK libspdk_idxd.so 00:03:00.702 SYMLINK libspdk_vmd.so 00:03:00.702 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:00.702 CC lib/jsonrpc/jsonrpc_server.o 
00:03:00.702 CC lib/jsonrpc/jsonrpc_client.o 00:03:00.702 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:00.964 LIB libspdk_jsonrpc.a 00:03:00.964 SO libspdk_jsonrpc.so.6.0 00:03:01.225 SYMLINK libspdk_jsonrpc.so 00:03:01.225 LIB libspdk_env_dpdk.a 00:03:01.225 SO libspdk_env_dpdk.so.14.1 00:03:01.486 SYMLINK libspdk_env_dpdk.so 00:03:01.486 CC lib/rpc/rpc.o 00:03:01.746 LIB libspdk_rpc.a 00:03:01.747 SO libspdk_rpc.so.6.0 00:03:01.747 SYMLINK libspdk_rpc.so 00:03:02.008 CC lib/trace/trace_flags.o 00:03:02.008 CC lib/trace/trace.o 00:03:02.008 CC lib/notify/notify.o 00:03:02.008 CC lib/keyring/keyring.o 00:03:02.008 CC lib/trace/trace_rpc.o 00:03:02.008 CC lib/notify/notify_rpc.o 00:03:02.008 CC lib/keyring/keyring_rpc.o 00:03:02.269 LIB libspdk_notify.a 00:03:02.269 SO libspdk_notify.so.6.0 00:03:02.269 LIB libspdk_trace.a 00:03:02.269 LIB libspdk_keyring.a 00:03:02.269 SYMLINK libspdk_notify.so 00:03:02.530 SO libspdk_keyring.so.1.0 00:03:02.530 SO libspdk_trace.so.10.0 00:03:02.530 SYMLINK libspdk_keyring.so 00:03:02.530 SYMLINK libspdk_trace.so 00:03:02.790 CC lib/thread/thread.o 00:03:02.790 CC lib/thread/iobuf.o 00:03:02.790 CC lib/sock/sock.o 00:03:02.790 CC lib/sock/sock_rpc.o 00:03:03.052 LIB libspdk_sock.a 00:03:03.313 SO libspdk_sock.so.10.0 00:03:03.313 SYMLINK libspdk_sock.so 00:03:03.573 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:03.573 CC lib/nvme/nvme_ctrlr.o 00:03:03.573 CC lib/nvme/nvme_fabric.o 00:03:03.573 CC lib/nvme/nvme_ns_cmd.o 00:03:03.573 CC lib/nvme/nvme_ns.o 00:03:03.573 CC lib/nvme/nvme_pcie_common.o 00:03:03.573 CC lib/nvme/nvme_pcie.o 00:03:03.573 CC lib/nvme/nvme_qpair.o 00:03:03.573 CC lib/nvme/nvme.o 00:03:03.573 CC lib/nvme/nvme_quirks.o 00:03:03.573 CC lib/nvme/nvme_transport.o 00:03:03.573 CC lib/nvme/nvme_discovery.o 00:03:03.573 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:03.573 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:03.573 CC lib/nvme/nvme_tcp.o 00:03:03.573 CC lib/nvme/nvme_opal.o 00:03:03.573 CC lib/nvme/nvme_io_msg.o 00:03:03.573 CC lib/nvme/nvme_poll_group.o 00:03:03.573 CC lib/nvme/nvme_zns.o 00:03:03.573 CC lib/nvme/nvme_stubs.o 00:03:03.573 CC lib/nvme/nvme_auth.o 00:03:03.573 CC lib/nvme/nvme_cuse.o 00:03:03.573 CC lib/nvme/nvme_vfio_user.o 00:03:03.573 CC lib/nvme/nvme_rdma.o 00:03:04.165 LIB libspdk_thread.a 00:03:04.165 SO libspdk_thread.so.10.1 00:03:04.165 SYMLINK libspdk_thread.so 00:03:04.424 CC lib/accel/accel.o 00:03:04.424 CC lib/accel/accel_rpc.o 00:03:04.424 CC lib/accel/accel_sw.o 00:03:04.424 CC lib/blob/blobstore.o 00:03:04.424 CC lib/blob/zeroes.o 00:03:04.424 CC lib/blob/request.o 00:03:04.424 CC lib/blob/blob_bs_dev.o 00:03:04.424 CC lib/init/json_config.o 00:03:04.424 CC lib/init/subsystem.o 00:03:04.424 CC lib/init/subsystem_rpc.o 00:03:04.424 CC lib/init/rpc.o 00:03:04.424 CC lib/vfu_tgt/tgt_endpoint.o 00:03:04.424 CC lib/vfu_tgt/tgt_rpc.o 00:03:04.424 CC lib/virtio/virtio.o 00:03:04.424 CC lib/virtio/virtio_vhost_user.o 00:03:04.424 CC lib/virtio/virtio_vfio_user.o 00:03:04.424 CC lib/virtio/virtio_pci.o 00:03:04.685 LIB libspdk_init.a 00:03:04.685 SO libspdk_init.so.5.0 00:03:04.945 LIB libspdk_virtio.a 00:03:04.945 LIB libspdk_vfu_tgt.a 00:03:04.945 SO libspdk_virtio.so.7.0 00:03:04.945 SO libspdk_vfu_tgt.so.3.0 00:03:04.945 SYMLINK libspdk_init.so 00:03:04.945 SYMLINK libspdk_virtio.so 00:03:04.945 SYMLINK libspdk_vfu_tgt.so 00:03:05.206 CC lib/event/app.o 00:03:05.206 CC lib/event/reactor.o 00:03:05.206 CC lib/event/log_rpc.o 00:03:05.206 CC lib/event/app_rpc.o 00:03:05.206 CC lib/event/scheduler_static.o 00:03:05.466 LIB 
libspdk_accel.a 00:03:05.466 SO libspdk_accel.so.15.1 00:03:05.466 LIB libspdk_nvme.a 00:03:05.466 SYMLINK libspdk_accel.so 00:03:05.466 SO libspdk_nvme.so.13.1 00:03:05.466 LIB libspdk_event.a 00:03:05.727 SO libspdk_event.so.14.0 00:03:05.727 SYMLINK libspdk_event.so 00:03:05.727 CC lib/bdev/bdev.o 00:03:05.727 CC lib/bdev/bdev_rpc.o 00:03:05.727 CC lib/bdev/bdev_zone.o 00:03:05.727 CC lib/bdev/part.o 00:03:05.727 CC lib/bdev/scsi_nvme.o 00:03:05.989 SYMLINK libspdk_nvme.so 00:03:06.928 LIB libspdk_blob.a 00:03:07.190 SO libspdk_blob.so.11.0 00:03:07.190 SYMLINK libspdk_blob.so 00:03:07.451 CC lib/blobfs/blobfs.o 00:03:07.451 CC lib/blobfs/tree.o 00:03:07.451 CC lib/lvol/lvol.o 00:03:08.024 LIB libspdk_bdev.a 00:03:08.024 SO libspdk_bdev.so.15.1 00:03:08.286 SYMLINK libspdk_bdev.so 00:03:08.286 LIB libspdk_blobfs.a 00:03:08.286 SO libspdk_blobfs.so.10.0 00:03:08.286 LIB libspdk_lvol.a 00:03:08.286 SO libspdk_lvol.so.10.0 00:03:08.286 SYMLINK libspdk_blobfs.so 00:03:08.286 SYMLINK libspdk_lvol.so 00:03:08.545 CC lib/nvmf/ctrlr.o 00:03:08.545 CC lib/scsi/dev.o 00:03:08.545 CC lib/nvmf/ctrlr_discovery.o 00:03:08.545 CC lib/nvmf/ctrlr_bdev.o 00:03:08.545 CC lib/nbd/nbd.o 00:03:08.545 CC lib/scsi/lun.o 00:03:08.545 CC lib/scsi/port.o 00:03:08.545 CC lib/nvmf/subsystem.o 00:03:08.545 CC lib/nbd/nbd_rpc.o 00:03:08.545 CC lib/scsi/scsi.o 00:03:08.545 CC lib/nvmf/nvmf.o 00:03:08.545 CC lib/scsi/scsi_bdev.o 00:03:08.545 CC lib/scsi/scsi_pr.o 00:03:08.545 CC lib/nvmf/nvmf_rpc.o 00:03:08.545 CC lib/nvmf/transport.o 00:03:08.545 CC lib/scsi/scsi_rpc.o 00:03:08.545 CC lib/scsi/task.o 00:03:08.545 CC lib/nvmf/tcp.o 00:03:08.545 CC lib/nvmf/stubs.o 00:03:08.545 CC lib/nvmf/vfio_user.o 00:03:08.545 CC lib/nvmf/mdns_server.o 00:03:08.545 CC lib/nvmf/rdma.o 00:03:08.545 CC lib/nvmf/auth.o 00:03:08.545 CC lib/ftl/ftl_core.o 00:03:08.545 CC lib/ftl/ftl_init.o 00:03:08.545 CC lib/ftl/ftl_layout.o 00:03:08.545 CC lib/ublk/ublk.o 00:03:08.545 CC lib/ftl/ftl_debug.o 00:03:08.545 CC lib/ublk/ublk_rpc.o 00:03:08.545 CC lib/ftl/ftl_io.o 00:03:08.545 CC lib/ftl/ftl_sb.o 00:03:08.545 CC lib/ftl/ftl_l2p.o 00:03:08.545 CC lib/ftl/ftl_l2p_flat.o 00:03:08.545 CC lib/ftl/ftl_nv_cache.o 00:03:08.545 CC lib/ftl/ftl_band.o 00:03:08.545 CC lib/ftl/ftl_band_ops.o 00:03:08.545 CC lib/ftl/ftl_writer.o 00:03:08.545 CC lib/ftl/ftl_rq.o 00:03:08.545 CC lib/ftl/ftl_reloc.o 00:03:08.545 CC lib/ftl/ftl_l2p_cache.o 00:03:08.545 CC lib/ftl/ftl_p2l.o 00:03:08.545 CC lib/ftl/mngt/ftl_mngt.o 00:03:08.545 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:08.545 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:08.545 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:08.545 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:08.545 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:08.545 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:08.545 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:08.545 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:08.545 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:08.545 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:08.545 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:08.545 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:08.545 CC lib/ftl/utils/ftl_conf.o 00:03:08.545 CC lib/ftl/utils/ftl_md.o 00:03:08.545 CC lib/ftl/utils/ftl_mempool.o 00:03:08.545 CC lib/ftl/utils/ftl_bitmap.o 00:03:08.545 CC lib/ftl/utils/ftl_property.o 00:03:08.545 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:08.545 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:08.545 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:08.545 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:08.545 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:08.545 CC 
lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:08.545 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:08.545 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:08.545 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:08.545 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:08.545 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:08.545 CC lib/ftl/base/ftl_base_dev.o 00:03:08.545 CC lib/ftl/base/ftl_base_bdev.o 00:03:08.545 CC lib/ftl/ftl_trace.o 00:03:09.116 LIB libspdk_nbd.a 00:03:09.116 LIB libspdk_scsi.a 00:03:09.116 SO libspdk_nbd.so.7.0 00:03:09.116 SO libspdk_scsi.so.9.0 00:03:09.116 SYMLINK libspdk_nbd.so 00:03:09.116 SYMLINK libspdk_scsi.so 00:03:09.116 LIB libspdk_ublk.a 00:03:09.377 SO libspdk_ublk.so.3.0 00:03:09.377 SYMLINK libspdk_ublk.so 00:03:09.377 CC lib/iscsi/conn.o 00:03:09.377 CC lib/iscsi/init_grp.o 00:03:09.377 CC lib/iscsi/iscsi.o 00:03:09.377 CC lib/iscsi/md5.o 00:03:09.377 CC lib/iscsi/param.o 00:03:09.377 CC lib/iscsi/portal_grp.o 00:03:09.377 CC lib/iscsi/tgt_node.o 00:03:09.377 CC lib/iscsi/iscsi_subsystem.o 00:03:09.377 CC lib/iscsi/iscsi_rpc.o 00:03:09.661 CC lib/iscsi/task.o 00:03:09.661 CC lib/vhost/vhost.o 00:03:09.661 CC lib/vhost/vhost_rpc.o 00:03:09.661 CC lib/vhost/vhost_scsi.o 00:03:09.661 CC lib/vhost/vhost_blk.o 00:03:09.661 CC lib/vhost/rte_vhost_user.o 00:03:09.661 LIB libspdk_ftl.a 00:03:09.661 SO libspdk_ftl.so.9.0 00:03:10.233 SYMLINK libspdk_ftl.so 00:03:10.494 LIB libspdk_nvmf.a 00:03:10.494 SO libspdk_nvmf.so.18.1 00:03:10.494 LIB libspdk_vhost.a 00:03:10.494 SO libspdk_vhost.so.8.0 00:03:10.755 SYMLINK libspdk_vhost.so 00:03:10.755 SYMLINK libspdk_nvmf.so 00:03:10.755 LIB libspdk_iscsi.a 00:03:10.755 SO libspdk_iscsi.so.8.0 00:03:11.016 SYMLINK libspdk_iscsi.so 00:03:11.589 CC module/vfu_device/vfu_virtio.o 00:03:11.589 CC module/env_dpdk/env_dpdk_rpc.o 00:03:11.589 CC module/vfu_device/vfu_virtio_blk.o 00:03:11.589 CC module/vfu_device/vfu_virtio_scsi.o 00:03:11.589 CC module/vfu_device/vfu_virtio_rpc.o 00:03:11.589 CC module/accel/error/accel_error.o 00:03:11.589 CC module/sock/posix/posix.o 00:03:11.589 CC module/accel/error/accel_error_rpc.o 00:03:11.589 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:11.589 LIB libspdk_env_dpdk_rpc.a 00:03:11.589 CC module/scheduler/gscheduler/gscheduler.o 00:03:11.589 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:11.589 CC module/accel/dsa/accel_dsa.o 00:03:11.589 CC module/keyring/file/keyring.o 00:03:11.589 CC module/keyring/file/keyring_rpc.o 00:03:11.589 CC module/accel/dsa/accel_dsa_rpc.o 00:03:11.589 CC module/accel/iaa/accel_iaa_rpc.o 00:03:11.589 CC module/accel/iaa/accel_iaa.o 00:03:11.589 CC module/accel/ioat/accel_ioat.o 00:03:11.589 CC module/keyring/linux/keyring.o 00:03:11.589 CC module/keyring/linux/keyring_rpc.o 00:03:11.589 CC module/blob/bdev/blob_bdev.o 00:03:11.589 CC module/accel/ioat/accel_ioat_rpc.o 00:03:11.589 SO libspdk_env_dpdk_rpc.so.6.0 00:03:11.850 SYMLINK libspdk_env_dpdk_rpc.so 00:03:11.850 LIB libspdk_scheduler_gscheduler.a 00:03:11.850 LIB libspdk_keyring_linux.a 00:03:11.850 LIB libspdk_accel_error.a 00:03:11.850 LIB libspdk_keyring_file.a 00:03:11.850 LIB libspdk_scheduler_dpdk_governor.a 00:03:11.850 LIB libspdk_scheduler_dynamic.a 00:03:11.850 SO libspdk_scheduler_gscheduler.so.4.0 00:03:11.850 SO libspdk_keyring_linux.so.1.0 00:03:11.850 LIB libspdk_accel_ioat.a 00:03:11.850 SO libspdk_accel_error.so.2.0 00:03:11.850 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:11.850 SO libspdk_keyring_file.so.1.0 00:03:11.850 LIB libspdk_accel_iaa.a 00:03:11.850 SO libspdk_scheduler_dynamic.so.4.0 00:03:11.850 LIB 
libspdk_accel_dsa.a 00:03:11.850 SO libspdk_accel_ioat.so.6.0 00:03:11.850 SYMLINK libspdk_scheduler_gscheduler.so 00:03:11.850 SYMLINK libspdk_keyring_linux.so 00:03:11.850 LIB libspdk_blob_bdev.a 00:03:11.850 SO libspdk_accel_iaa.so.3.0 00:03:11.850 SO libspdk_accel_dsa.so.5.0 00:03:11.850 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:11.850 SYMLINK libspdk_accel_error.so 00:03:11.850 SYMLINK libspdk_keyring_file.so 00:03:11.850 SYMLINK libspdk_scheduler_dynamic.so 00:03:11.850 SO libspdk_blob_bdev.so.11.0 00:03:12.111 SYMLINK libspdk_accel_ioat.so 00:03:12.111 SYMLINK libspdk_accel_iaa.so 00:03:12.111 SYMLINK libspdk_accel_dsa.so 00:03:12.111 LIB libspdk_vfu_device.a 00:03:12.111 SYMLINK libspdk_blob_bdev.so 00:03:12.111 SO libspdk_vfu_device.so.3.0 00:03:12.111 LIB libspdk_sock_posix.a 00:03:12.111 SYMLINK libspdk_vfu_device.so 00:03:12.111 SO libspdk_sock_posix.so.6.0 00:03:12.372 SYMLINK libspdk_sock_posix.so 00:03:12.632 CC module/bdev/malloc/bdev_malloc.o 00:03:12.632 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:12.632 CC module/bdev/nvme/bdev_nvme.o 00:03:12.632 CC module/bdev/split/vbdev_split.o 00:03:12.632 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:12.632 CC module/bdev/split/vbdev_split_rpc.o 00:03:12.632 CC module/bdev/nvme/nvme_rpc.o 00:03:12.632 CC module/bdev/nvme/bdev_mdns_client.o 00:03:12.632 CC module/bdev/nvme/vbdev_opal.o 00:03:12.632 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:12.632 CC module/bdev/null/bdev_null.o 00:03:12.632 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:12.632 CC module/bdev/passthru/vbdev_passthru.o 00:03:12.632 CC module/bdev/error/vbdev_error.o 00:03:12.632 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:12.632 CC module/bdev/null/bdev_null_rpc.o 00:03:12.632 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:12.632 CC module/bdev/delay/vbdev_delay.o 00:03:12.632 CC module/bdev/error/vbdev_error_rpc.o 00:03:12.632 CC module/bdev/aio/bdev_aio_rpc.o 00:03:12.632 CC module/bdev/aio/bdev_aio.o 00:03:12.632 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:12.632 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:12.632 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:12.632 CC module/bdev/gpt/gpt.o 00:03:12.632 CC module/bdev/gpt/vbdev_gpt.o 00:03:12.632 CC module/blobfs/bdev/blobfs_bdev.o 00:03:12.632 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:12.632 CC module/bdev/lvol/vbdev_lvol.o 00:03:12.632 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:12.632 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:12.632 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:12.632 CC module/bdev/raid/bdev_raid.o 00:03:12.632 CC module/bdev/raid/bdev_raid_rpc.o 00:03:12.632 CC module/bdev/ftl/bdev_ftl.o 00:03:12.632 CC module/bdev/raid/bdev_raid_sb.o 00:03:12.632 CC module/bdev/raid/raid0.o 00:03:12.632 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:12.632 CC module/bdev/raid/raid1.o 00:03:12.632 CC module/bdev/raid/concat.o 00:03:12.632 CC module/bdev/iscsi/bdev_iscsi.o 00:03:12.632 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:12.892 LIB libspdk_blobfs_bdev.a 00:03:12.892 LIB libspdk_bdev_split.a 00:03:12.892 LIB libspdk_bdev_null.a 00:03:12.892 LIB libspdk_bdev_error.a 00:03:12.892 SO libspdk_blobfs_bdev.so.6.0 00:03:12.892 LIB libspdk_bdev_zone_block.a 00:03:12.892 LIB libspdk_bdev_gpt.a 00:03:12.892 SO libspdk_bdev_split.so.6.0 00:03:12.892 SO libspdk_bdev_zone_block.so.6.0 00:03:12.892 SO libspdk_bdev_null.so.6.0 00:03:12.892 SO libspdk_bdev_error.so.6.0 00:03:12.892 LIB libspdk_bdev_passthru.a 00:03:12.892 SO libspdk_bdev_gpt.so.6.0 00:03:12.892 LIB libspdk_bdev_malloc.a 
00:03:12.892 LIB libspdk_bdev_ftl.a 00:03:12.892 SYMLINK libspdk_blobfs_bdev.so 00:03:12.892 SO libspdk_bdev_passthru.so.6.0 00:03:12.892 LIB libspdk_bdev_aio.a 00:03:12.892 SYMLINK libspdk_bdev_zone_block.so 00:03:12.892 LIB libspdk_bdev_delay.a 00:03:12.892 SO libspdk_bdev_ftl.so.6.0 00:03:12.892 SYMLINK libspdk_bdev_split.so 00:03:12.892 SYMLINK libspdk_bdev_error.so 00:03:12.892 SO libspdk_bdev_malloc.so.6.0 00:03:12.892 SYMLINK libspdk_bdev_null.so 00:03:12.892 SO libspdk_bdev_aio.so.6.0 00:03:12.892 LIB libspdk_bdev_iscsi.a 00:03:12.892 SYMLINK libspdk_bdev_gpt.so 00:03:12.892 SO libspdk_bdev_delay.so.6.0 00:03:12.892 SYMLINK libspdk_bdev_passthru.so 00:03:12.892 SO libspdk_bdev_iscsi.so.6.0 00:03:12.892 SYMLINK libspdk_bdev_ftl.so 00:03:13.154 SYMLINK libspdk_bdev_malloc.so 00:03:13.154 SYMLINK libspdk_bdev_aio.so 00:03:13.154 SYMLINK libspdk_bdev_delay.so 00:03:13.154 LIB libspdk_bdev_lvol.a 00:03:13.154 LIB libspdk_bdev_virtio.a 00:03:13.154 SYMLINK libspdk_bdev_iscsi.so 00:03:13.154 SO libspdk_bdev_lvol.so.6.0 00:03:13.154 SO libspdk_bdev_virtio.so.6.0 00:03:13.154 SYMLINK libspdk_bdev_virtio.so 00:03:13.154 SYMLINK libspdk_bdev_lvol.so 00:03:13.415 LIB libspdk_bdev_raid.a 00:03:13.415 SO libspdk_bdev_raid.so.6.0 00:03:13.675 SYMLINK libspdk_bdev_raid.so 00:03:14.618 LIB libspdk_bdev_nvme.a 00:03:14.618 SO libspdk_bdev_nvme.so.7.0 00:03:14.618 SYMLINK libspdk_bdev_nvme.so 00:03:15.561 CC module/event/subsystems/iobuf/iobuf.o 00:03:15.561 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:15.561 CC module/event/subsystems/vmd/vmd.o 00:03:15.561 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:15.561 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:15.561 CC module/event/subsystems/scheduler/scheduler.o 00:03:15.561 CC module/event/subsystems/sock/sock.o 00:03:15.561 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:15.561 CC module/event/subsystems/keyring/keyring.o 00:03:15.561 LIB libspdk_event_vfu_tgt.a 00:03:15.561 LIB libspdk_event_iobuf.a 00:03:15.561 LIB libspdk_event_vhost_blk.a 00:03:15.561 LIB libspdk_event_scheduler.a 00:03:15.561 LIB libspdk_event_keyring.a 00:03:15.561 LIB libspdk_event_vmd.a 00:03:15.561 LIB libspdk_event_sock.a 00:03:15.561 SO libspdk_event_vfu_tgt.so.3.0 00:03:15.561 SO libspdk_event_iobuf.so.3.0 00:03:15.561 SO libspdk_event_vhost_blk.so.3.0 00:03:15.561 SO libspdk_event_scheduler.so.4.0 00:03:15.561 SO libspdk_event_keyring.so.1.0 00:03:15.561 SO libspdk_event_vmd.so.6.0 00:03:15.561 SO libspdk_event_sock.so.5.0 00:03:15.561 SYMLINK libspdk_event_vfu_tgt.so 00:03:15.561 SYMLINK libspdk_event_iobuf.so 00:03:15.561 SYMLINK libspdk_event_vhost_blk.so 00:03:15.561 SYMLINK libspdk_event_scheduler.so 00:03:15.561 SYMLINK libspdk_event_keyring.so 00:03:15.561 SYMLINK libspdk_event_sock.so 00:03:15.561 SYMLINK libspdk_event_vmd.so 00:03:16.137 CC module/event/subsystems/accel/accel.o 00:03:16.137 LIB libspdk_event_accel.a 00:03:16.137 SO libspdk_event_accel.so.6.0 00:03:16.137 SYMLINK libspdk_event_accel.so 00:03:16.708 CC module/event/subsystems/bdev/bdev.o 00:03:16.708 LIB libspdk_event_bdev.a 00:03:16.708 SO libspdk_event_bdev.so.6.0 00:03:16.970 SYMLINK libspdk_event_bdev.so 00:03:17.232 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:17.232 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:17.232 CC module/event/subsystems/scsi/scsi.o 00:03:17.232 CC module/event/subsystems/nbd/nbd.o 00:03:17.232 CC module/event/subsystems/ublk/ublk.o 00:03:17.494 LIB libspdk_event_nbd.a 00:03:17.494 LIB libspdk_event_ublk.a 00:03:17.494 LIB 
libspdk_event_scsi.a 00:03:17.494 SO libspdk_event_nbd.so.6.0 00:03:17.494 SO libspdk_event_ublk.so.3.0 00:03:17.494 LIB libspdk_event_nvmf.a 00:03:17.494 SO libspdk_event_scsi.so.6.0 00:03:17.494 SYMLINK libspdk_event_nbd.so 00:03:17.494 SO libspdk_event_nvmf.so.6.0 00:03:17.494 SYMLINK libspdk_event_ublk.so 00:03:17.494 SYMLINK libspdk_event_scsi.so 00:03:17.494 SYMLINK libspdk_event_nvmf.so 00:03:17.801 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:17.801 CC module/event/subsystems/iscsi/iscsi.o 00:03:18.063 LIB libspdk_event_vhost_scsi.a 00:03:18.063 LIB libspdk_event_iscsi.a 00:03:18.063 SO libspdk_event_vhost_scsi.so.3.0 00:03:18.063 SO libspdk_event_iscsi.so.6.0 00:03:18.063 SYMLINK libspdk_event_vhost_scsi.so 00:03:18.063 SYMLINK libspdk_event_iscsi.so 00:03:18.322 SO libspdk.so.6.0 00:03:18.322 SYMLINK libspdk.so 00:03:18.893 CC app/spdk_nvme_perf/perf.o 00:03:18.893 CXX app/trace/trace.o 00:03:18.893 CC app/spdk_nvme_identify/identify.o 00:03:18.893 CC app/spdk_nvme_discover/discovery_aer.o 00:03:18.893 TEST_HEADER include/spdk/accel.h 00:03:18.893 TEST_HEADER include/spdk/accel_module.h 00:03:18.893 TEST_HEADER include/spdk/assert.h 00:03:18.893 CC app/trace_record/trace_record.o 00:03:18.893 TEST_HEADER include/spdk/base64.h 00:03:18.893 TEST_HEADER include/spdk/barrier.h 00:03:18.893 TEST_HEADER include/spdk/bdev.h 00:03:18.893 TEST_HEADER include/spdk/bdev_module.h 00:03:18.893 CC app/spdk_lspci/spdk_lspci.o 00:03:18.893 CC app/spdk_top/spdk_top.o 00:03:18.893 TEST_HEADER include/spdk/bdev_zone.h 00:03:18.893 CC test/rpc_client/rpc_client_test.o 00:03:18.893 TEST_HEADER include/spdk/bit_array.h 00:03:18.893 TEST_HEADER include/spdk/bit_pool.h 00:03:18.893 TEST_HEADER include/spdk/blob_bdev.h 00:03:18.893 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:18.893 TEST_HEADER include/spdk/blob.h 00:03:18.893 TEST_HEADER include/spdk/blobfs.h 00:03:18.893 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:18.893 TEST_HEADER include/spdk/conf.h 00:03:18.893 TEST_HEADER include/spdk/config.h 00:03:18.893 TEST_HEADER include/spdk/cpuset.h 00:03:18.893 TEST_HEADER include/spdk/crc32.h 00:03:18.893 TEST_HEADER include/spdk/crc16.h 00:03:18.893 TEST_HEADER include/spdk/crc64.h 00:03:18.893 TEST_HEADER include/spdk/dif.h 00:03:18.893 TEST_HEADER include/spdk/endian.h 00:03:18.893 TEST_HEADER include/spdk/dma.h 00:03:18.893 TEST_HEADER include/spdk/env_dpdk.h 00:03:18.893 TEST_HEADER include/spdk/env.h 00:03:18.893 TEST_HEADER include/spdk/event.h 00:03:18.893 TEST_HEADER include/spdk/fd_group.h 00:03:18.893 TEST_HEADER include/spdk/fd.h 00:03:18.893 TEST_HEADER include/spdk/file.h 00:03:18.893 TEST_HEADER include/spdk/gpt_spec.h 00:03:18.893 TEST_HEADER include/spdk/ftl.h 00:03:18.893 TEST_HEADER include/spdk/histogram_data.h 00:03:18.893 TEST_HEADER include/spdk/hexlify.h 00:03:18.893 CC app/spdk_dd/spdk_dd.o 00:03:18.893 TEST_HEADER include/spdk/idxd_spec.h 00:03:18.893 TEST_HEADER include/spdk/idxd.h 00:03:18.893 TEST_HEADER include/spdk/init.h 00:03:18.893 TEST_HEADER include/spdk/ioat.h 00:03:18.893 CC app/nvmf_tgt/nvmf_main.o 00:03:18.893 TEST_HEADER include/spdk/ioat_spec.h 00:03:18.893 CC app/iscsi_tgt/iscsi_tgt.o 00:03:18.893 TEST_HEADER include/spdk/iscsi_spec.h 00:03:18.893 TEST_HEADER include/spdk/json.h 00:03:18.893 TEST_HEADER include/spdk/keyring.h 00:03:18.893 TEST_HEADER include/spdk/jsonrpc.h 00:03:18.893 TEST_HEADER include/spdk/keyring_module.h 00:03:18.893 TEST_HEADER include/spdk/likely.h 00:03:18.893 TEST_HEADER include/spdk/log.h 00:03:18.893 TEST_HEADER 
include/spdk/lvol.h 00:03:18.893 CC app/spdk_tgt/spdk_tgt.o 00:03:18.893 TEST_HEADER include/spdk/mmio.h 00:03:18.893 TEST_HEADER include/spdk/memory.h 00:03:18.893 TEST_HEADER include/spdk/nbd.h 00:03:18.893 TEST_HEADER include/spdk/notify.h 00:03:18.893 TEST_HEADER include/spdk/nvme.h 00:03:18.893 TEST_HEADER include/spdk/nvme_intel.h 00:03:18.893 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:18.893 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:18.893 TEST_HEADER include/spdk/nvme_spec.h 00:03:18.893 TEST_HEADER include/spdk/nvme_zns.h 00:03:18.893 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:18.893 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:18.893 TEST_HEADER include/spdk/nvmf.h 00:03:18.893 TEST_HEADER include/spdk/nvmf_spec.h 00:03:18.893 TEST_HEADER include/spdk/nvmf_transport.h 00:03:18.893 TEST_HEADER include/spdk/opal.h 00:03:18.893 TEST_HEADER include/spdk/opal_spec.h 00:03:18.893 TEST_HEADER include/spdk/pci_ids.h 00:03:18.893 TEST_HEADER include/spdk/pipe.h 00:03:18.893 TEST_HEADER include/spdk/queue.h 00:03:18.893 TEST_HEADER include/spdk/reduce.h 00:03:18.893 TEST_HEADER include/spdk/rpc.h 00:03:18.893 TEST_HEADER include/spdk/scheduler.h 00:03:18.893 TEST_HEADER include/spdk/scsi_spec.h 00:03:18.893 TEST_HEADER include/spdk/scsi.h 00:03:18.893 TEST_HEADER include/spdk/sock.h 00:03:18.893 TEST_HEADER include/spdk/stdinc.h 00:03:18.893 TEST_HEADER include/spdk/string.h 00:03:18.893 TEST_HEADER include/spdk/thread.h 00:03:18.893 TEST_HEADER include/spdk/trace.h 00:03:18.893 TEST_HEADER include/spdk/trace_parser.h 00:03:18.893 TEST_HEADER include/spdk/tree.h 00:03:18.893 TEST_HEADER include/spdk/ublk.h 00:03:18.893 TEST_HEADER include/spdk/util.h 00:03:18.893 TEST_HEADER include/spdk/uuid.h 00:03:18.893 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:18.893 TEST_HEADER include/spdk/version.h 00:03:18.893 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:18.893 TEST_HEADER include/spdk/vhost.h 00:03:18.893 TEST_HEADER include/spdk/vmd.h 00:03:18.893 TEST_HEADER include/spdk/xor.h 00:03:18.893 TEST_HEADER include/spdk/zipf.h 00:03:18.893 CXX test/cpp_headers/accel_module.o 00:03:18.893 CXX test/cpp_headers/accel.o 00:03:18.893 CXX test/cpp_headers/assert.o 00:03:18.893 CXX test/cpp_headers/barrier.o 00:03:18.893 CXX test/cpp_headers/base64.o 00:03:18.893 CXX test/cpp_headers/bdev.o 00:03:18.893 CXX test/cpp_headers/bdev_module.o 00:03:18.893 CXX test/cpp_headers/bdev_zone.o 00:03:18.893 CXX test/cpp_headers/bit_array.o 00:03:18.893 CXX test/cpp_headers/blob_bdev.o 00:03:18.893 CXX test/cpp_headers/bit_pool.o 00:03:18.893 CXX test/cpp_headers/blobfs_bdev.o 00:03:18.893 CXX test/cpp_headers/blobfs.o 00:03:18.893 CXX test/cpp_headers/blob.o 00:03:18.893 CXX test/cpp_headers/conf.o 00:03:18.893 CXX test/cpp_headers/config.o 00:03:18.893 CXX test/cpp_headers/cpuset.o 00:03:18.893 CXX test/cpp_headers/crc32.o 00:03:18.893 CXX test/cpp_headers/crc16.o 00:03:18.893 CXX test/cpp_headers/crc64.o 00:03:18.893 CXX test/cpp_headers/dma.o 00:03:18.893 CXX test/cpp_headers/dif.o 00:03:18.893 CXX test/cpp_headers/env_dpdk.o 00:03:18.893 CXX test/cpp_headers/env.o 00:03:18.893 CXX test/cpp_headers/endian.o 00:03:18.893 CXX test/cpp_headers/fd_group.o 00:03:18.893 CXX test/cpp_headers/event.o 00:03:18.893 CXX test/cpp_headers/fd.o 00:03:18.893 CXX test/cpp_headers/ftl.o 00:03:18.893 CXX test/cpp_headers/gpt_spec.o 00:03:18.893 CXX test/cpp_headers/file.o 00:03:18.893 CC examples/ioat/verify/verify.o 00:03:18.893 CXX test/cpp_headers/hexlify.o 00:03:18.893 CXX test/cpp_headers/histogram_data.o 
00:03:18.893 CXX test/cpp_headers/idxd_spec.o 00:03:18.893 CXX test/cpp_headers/idxd.o 00:03:18.893 CXX test/cpp_headers/init.o 00:03:18.893 CXX test/cpp_headers/ioat_spec.o 00:03:18.893 CXX test/cpp_headers/ioat.o 00:03:18.893 CXX test/cpp_headers/json.o 00:03:18.893 CXX test/cpp_headers/jsonrpc.o 00:03:18.893 CXX test/cpp_headers/iscsi_spec.o 00:03:18.893 CXX test/cpp_headers/keyring_module.o 00:03:18.893 CC test/app/jsoncat/jsoncat.o 00:03:18.893 CXX test/cpp_headers/keyring.o 00:03:18.893 CXX test/cpp_headers/likely.o 00:03:18.893 CXX test/cpp_headers/lvol.o 00:03:18.893 CXX test/cpp_headers/log.o 00:03:18.893 CC examples/util/zipf/zipf.o 00:03:18.893 CXX test/cpp_headers/mmio.o 00:03:18.893 CXX test/cpp_headers/notify.o 00:03:18.893 CXX test/cpp_headers/nvme.o 00:03:18.893 CXX test/cpp_headers/nbd.o 00:03:18.893 CXX test/cpp_headers/memory.o 00:03:18.893 CC test/app/stub/stub.o 00:03:18.893 CXX test/cpp_headers/nvme_intel.o 00:03:18.893 CXX test/cpp_headers/nvme_zns.o 00:03:18.893 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:18.893 CXX test/cpp_headers/nvme_ocssd.o 00:03:18.893 CXX test/cpp_headers/nvme_spec.o 00:03:18.893 CXX test/cpp_headers/nvmf_cmd.o 00:03:18.893 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:18.893 CXX test/cpp_headers/nvmf.o 00:03:19.154 CXX test/cpp_headers/nvmf_spec.o 00:03:19.154 CC examples/ioat/perf/perf.o 00:03:19.154 CC test/thread/poller_perf/poller_perf.o 00:03:19.154 CXX test/cpp_headers/opal.o 00:03:19.154 CXX test/cpp_headers/nvmf_transport.o 00:03:19.154 CXX test/cpp_headers/opal_spec.o 00:03:19.154 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:19.154 CC test/app/histogram_perf/histogram_perf.o 00:03:19.154 CXX test/cpp_headers/pci_ids.o 00:03:19.154 CC test/env/memory/memory_ut.o 00:03:19.154 CXX test/cpp_headers/queue.o 00:03:19.154 CXX test/cpp_headers/pipe.o 00:03:19.154 CXX test/cpp_headers/reduce.o 00:03:19.154 CC test/dma/test_dma/test_dma.o 00:03:19.155 CXX test/cpp_headers/rpc.o 00:03:19.155 CXX test/cpp_headers/scsi_spec.o 00:03:19.155 CXX test/cpp_headers/scheduler.o 00:03:19.155 CXX test/cpp_headers/scsi.o 00:03:19.155 CXX test/cpp_headers/stdinc.o 00:03:19.155 CXX test/cpp_headers/sock.o 00:03:19.155 CXX test/cpp_headers/string.o 00:03:19.155 CXX test/cpp_headers/thread.o 00:03:19.155 CXX test/cpp_headers/trace.o 00:03:19.155 CC app/fio/nvme/fio_plugin.o 00:03:19.155 CXX test/cpp_headers/ublk.o 00:03:19.155 CXX test/cpp_headers/tree.o 00:03:19.155 CXX test/cpp_headers/trace_parser.o 00:03:19.155 LINK spdk_lspci 00:03:19.155 CC test/app/bdev_svc/bdev_svc.o 00:03:19.155 CXX test/cpp_headers/util.o 00:03:19.155 CXX test/cpp_headers/uuid.o 00:03:19.155 CC test/env/pci/pci_ut.o 00:03:19.155 CXX test/cpp_headers/vfio_user_pci.o 00:03:19.155 CC test/env/vtophys/vtophys.o 00:03:19.155 CXX test/cpp_headers/version.o 00:03:19.155 CXX test/cpp_headers/xor.o 00:03:19.155 CXX test/cpp_headers/vfio_user_spec.o 00:03:19.155 CXX test/cpp_headers/vhost.o 00:03:19.155 CXX test/cpp_headers/vmd.o 00:03:19.155 CXX test/cpp_headers/zipf.o 00:03:19.155 LINK spdk_nvme_discover 00:03:19.155 CC app/fio/bdev/fio_plugin.o 00:03:19.155 LINK rpc_client_test 00:03:19.155 LINK interrupt_tgt 00:03:19.417 LINK nvmf_tgt 00:03:19.417 CC test/env/mem_callbacks/mem_callbacks.o 00:03:19.417 LINK spdk_trace_record 00:03:19.417 LINK iscsi_tgt 00:03:19.417 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:19.417 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:19.417 LINK spdk_tgt 00:03:19.417 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:19.417 LINK zipf 00:03:19.417 CC 
test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:19.417 LINK spdk_trace 00:03:19.417 LINK jsoncat 00:03:19.676 LINK verify 00:03:19.676 LINK stub 00:03:19.676 LINK spdk_dd 00:03:19.676 LINK ioat_perf 00:03:19.676 LINK histogram_perf 00:03:19.676 LINK poller_perf 00:03:19.676 LINK env_dpdk_post_init 00:03:19.676 LINK vtophys 00:03:19.676 LINK bdev_svc 00:03:19.676 LINK test_dma 00:03:19.936 CC app/vhost/vhost.o 00:03:19.936 CC examples/idxd/perf/perf.o 00:03:19.936 CC examples/sock/hello_world/hello_sock.o 00:03:19.936 CC examples/vmd/lsvmd/lsvmd.o 00:03:19.936 LINK nvme_fuzz 00:03:19.936 CC examples/vmd/led/led.o 00:03:19.936 CC examples/thread/thread/thread_ex.o 00:03:19.936 LINK pci_ut 00:03:19.936 LINK spdk_bdev 00:03:19.936 LINK vhost_fuzz 00:03:20.197 LINK spdk_nvme 00:03:20.197 LINK vhost 00:03:20.197 LINK mem_callbacks 00:03:20.197 LINK lsvmd 00:03:20.197 LINK spdk_nvme_identify 00:03:20.197 LINK led 00:03:20.197 LINK spdk_nvme_perf 00:03:20.197 CC test/event/event_perf/event_perf.o 00:03:20.197 LINK spdk_top 00:03:20.197 LINK hello_sock 00:03:20.197 CC test/event/reactor_perf/reactor_perf.o 00:03:20.197 CC test/event/reactor/reactor.o 00:03:20.197 CC test/event/app_repeat/app_repeat.o 00:03:20.197 CC test/event/scheduler/scheduler.o 00:03:20.197 LINK idxd_perf 00:03:20.197 LINK thread 00:03:20.197 CC test/nvme/err_injection/err_injection.o 00:03:20.197 CC test/nvme/startup/startup.o 00:03:20.197 CC test/nvme/boot_partition/boot_partition.o 00:03:20.197 CC test/nvme/sgl/sgl.o 00:03:20.197 CC test/nvme/fused_ordering/fused_ordering.o 00:03:20.458 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:20.458 CC test/nvme/reset/reset.o 00:03:20.458 CC test/nvme/fdp/fdp.o 00:03:20.458 CC test/nvme/aer/aer.o 00:03:20.458 CC test/nvme/e2edp/nvme_dp.o 00:03:20.458 CC test/nvme/reserve/reserve.o 00:03:20.458 CC test/nvme/cuse/cuse.o 00:03:20.458 CC test/nvme/overhead/overhead.o 00:03:20.458 CC test/nvme/connect_stress/connect_stress.o 00:03:20.458 CC test/nvme/compliance/nvme_compliance.o 00:03:20.458 CC test/nvme/simple_copy/simple_copy.o 00:03:20.458 LINK reactor 00:03:20.458 CC test/accel/dif/dif.o 00:03:20.458 LINK event_perf 00:03:20.458 CC test/blobfs/mkfs/mkfs.o 00:03:20.458 LINK reactor_perf 00:03:20.458 LINK app_repeat 00:03:20.458 CC test/lvol/esnap/esnap.o 00:03:20.458 LINK startup 00:03:20.458 LINK memory_ut 00:03:20.458 LINK scheduler 00:03:20.458 LINK boot_partition 00:03:20.458 LINK err_injection 00:03:20.458 LINK connect_stress 00:03:20.458 LINK doorbell_aers 00:03:20.458 LINK fused_ordering 00:03:20.458 LINK sgl 00:03:20.458 LINK reserve 00:03:20.458 LINK simple_copy 00:03:20.721 LINK reset 00:03:20.721 LINK aer 00:03:20.721 LINK mkfs 00:03:20.721 LINK nvme_dp 00:03:20.721 LINK overhead 00:03:20.721 LINK nvme_compliance 00:03:20.721 CC examples/nvme/abort/abort.o 00:03:20.721 LINK fdp 00:03:20.721 CC examples/nvme/hello_world/hello_world.o 00:03:20.721 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:20.721 CC examples/nvme/hotplug/hotplug.o 00:03:20.721 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:20.721 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:20.721 CC examples/nvme/reconnect/reconnect.o 00:03:20.721 CC examples/nvme/arbitration/arbitration.o 00:03:20.721 CC examples/accel/perf/accel_perf.o 00:03:20.722 LINK dif 00:03:20.722 CC examples/blob/cli/blobcli.o 00:03:20.722 CC examples/blob/hello_world/hello_blob.o 00:03:20.981 LINK cmb_copy 00:03:20.981 LINK pmr_persistence 00:03:20.981 LINK hello_world 00:03:20.981 LINK hotplug 00:03:20.981 LINK iscsi_fuzz 00:03:20.981 
LINK reconnect 00:03:20.981 LINK arbitration 00:03:20.981 LINK abort 00:03:20.981 LINK nvme_manage 00:03:20.981 LINK accel_perf 00:03:21.242 LINK hello_blob 00:03:21.242 LINK blobcli 00:03:21.242 CC test/bdev/bdevio/bdevio.o 00:03:21.503 LINK cuse 00:03:21.764 CC examples/bdev/hello_world/hello_bdev.o 00:03:21.764 CC examples/bdev/bdevperf/bdevperf.o 00:03:21.764 LINK bdevio 00:03:22.024 LINK hello_bdev 00:03:22.285 LINK bdevperf 00:03:22.858 CC examples/nvmf/nvmf/nvmf.o 00:03:23.119 LINK nvmf 00:03:24.505 LINK esnap 00:03:24.766 00:03:24.766 real 0m50.590s 00:03:24.766 user 6m29.352s 00:03:24.766 sys 4m9.681s 00:03:24.766 09:12:11 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:24.766 09:12:11 make -- common/autotest_common.sh@10 -- $ set +x 00:03:24.766 ************************************ 00:03:24.766 END TEST make 00:03:24.766 ************************************ 00:03:24.766 09:12:11 -- common/autotest_common.sh@1142 -- $ return 0 00:03:24.766 09:12:11 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:24.766 09:12:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:24.766 09:12:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:24.766 09:12:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.766 09:12:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:24.766 09:12:11 -- pm/common@44 -- $ pid=342687 00:03:24.766 09:12:11 -- pm/common@50 -- $ kill -TERM 342687 00:03:24.766 09:12:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.766 09:12:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:24.766 09:12:11 -- pm/common@44 -- $ pid=342688 00:03:24.766 09:12:11 -- pm/common@50 -- $ kill -TERM 342688 00:03:24.766 09:12:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.766 09:12:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:24.766 09:12:11 -- pm/common@44 -- $ pid=342690 00:03:24.766 09:12:11 -- pm/common@50 -- $ kill -TERM 342690 00:03:24.766 09:12:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.766 09:12:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:24.766 09:12:11 -- pm/common@44 -- $ pid=342713 00:03:24.766 09:12:11 -- pm/common@50 -- $ sudo -E kill -TERM 342713 00:03:25.026 09:12:12 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:25.026 09:12:12 -- nvmf/common.sh@7 -- # uname -s 00:03:25.026 09:12:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:25.026 09:12:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:25.027 09:12:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:25.027 09:12:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:25.027 09:12:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:25.027 09:12:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:25.027 09:12:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:25.027 09:12:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:25.027 09:12:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:25.027 09:12:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:25.027 09:12:12 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:03:25.027 09:12:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:03:25.027 09:12:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:25.027 09:12:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:25.027 09:12:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:25.027 09:12:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:25.027 09:12:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:25.027 09:12:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:25.027 09:12:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:25.027 09:12:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:25.027 09:12:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.027 09:12:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.027 09:12:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.027 09:12:12 -- paths/export.sh@5 -- # export PATH 00:03:25.027 09:12:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.027 09:12:12 -- nvmf/common.sh@47 -- # : 0 00:03:25.027 09:12:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:25.027 09:12:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:25.027 09:12:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:25.027 09:12:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:25.027 09:12:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:25.027 09:12:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:25.027 09:12:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:25.027 09:12:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:25.027 09:12:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:25.027 09:12:12 -- spdk/autotest.sh@32 -- # uname -s 00:03:25.027 09:12:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:25.027 09:12:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:25.027 09:12:12 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:25.027 09:12:12 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:25.027 09:12:12 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:25.027 09:12:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:25.027 09:12:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:25.027 09:12:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:25.027 09:12:12 -- spdk/autotest.sh@48 -- # udevadm_pid=406246 00:03:25.027 09:12:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:25.027 09:12:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:25.027 09:12:12 -- pm/common@17 -- # local monitor 00:03:25.027 09:12:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.027 09:12:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.027 09:12:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.027 09:12:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.027 09:12:12 -- pm/common@21 -- # date +%s 00:03:25.027 09:12:12 -- pm/common@21 -- # date +%s 00:03:25.027 09:12:12 -- pm/common@25 -- # sleep 1 00:03:25.027 09:12:12 -- pm/common@21 -- # date +%s 00:03:25.027 09:12:12 -- pm/common@21 -- # date +%s 00:03:25.027 09:12:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721027532 00:03:25.027 09:12:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721027532 00:03:25.027 09:12:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721027532 00:03:25.027 09:12:12 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721027532 00:03:25.027 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721027532_collect-vmstat.pm.log 00:03:25.027 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721027532_collect-cpu-load.pm.log 00:03:25.027 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721027532_collect-cpu-temp.pm.log 00:03:25.027 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721027532_collect-bmc-pm.bmc.pm.log 00:03:25.969 09:12:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:25.969 09:12:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:25.969 09:12:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:25.969 09:12:13 -- common/autotest_common.sh@10 -- # set +x 00:03:25.969 09:12:13 -- spdk/autotest.sh@59 -- # create_test_list 00:03:25.969 09:12:13 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:25.969 09:12:13 -- common/autotest_common.sh@10 -- # set +x 00:03:25.969 09:12:13 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:25.969 09:12:13 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:25.969 09:12:13 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
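(Note on the prologue above: autotest registers a core-collector core_pattern and then launches the power/resource monitors -- collect-cpu-load, collect-vmstat, collect-cpu-temp, collect-bmc-pm -- each logging into SPDK's ../output/power directory under a monitor.autotest.sh.<epoch> name. Below is a minimal sketch of that launch pattern; the -d/-l/-p flags and script names follow the trace, but the start_monitors wrapper and relative paths are hypothetical, not the actual pm/common implementation.)

    #!/usr/bin/env bash
    # Sketch: start each resource monitor in the background with a shared timestamped log name.
    OUTPUT_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
    MONITORS=(collect-cpu-load collect-vmstat collect-cpu-temp)   # collect-bmc-pm additionally runs under "sudo -E" in the trace
    start_monitors() {
        local now
        now=$(date +%s)                  # one epoch stamp shared by every monitor log, as seen above
        mkdir -p "$OUTPUT_DIR"
        for mon in "${MONITORS[@]}"; do
            # each monitor daemonizes (-l) and tags its pm.log with the test name plus the epoch (-p)
            "./scripts/perf/pm/$mon" -d "$OUTPUT_DIR" -l -p "monitor.autotest.sh.$now" &
        done
    }
    start_monitors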
00:03:25.969 09:12:13 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:25.969 09:12:13 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:25.969 09:12:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:26.232 09:12:13 -- common/autotest_common.sh@1455 -- # uname 00:03:26.232 09:12:13 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:26.232 09:12:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:26.232 09:12:13 -- common/autotest_common.sh@1475 -- # uname 00:03:26.232 09:12:13 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:26.232 09:12:13 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:26.232 09:12:13 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:26.232 09:12:13 -- spdk/autotest.sh@72 -- # hash lcov 00:03:26.232 09:12:13 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:26.232 09:12:13 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:26.232 --rc lcov_branch_coverage=1 00:03:26.232 --rc lcov_function_coverage=1 00:03:26.232 --rc genhtml_branch_coverage=1 00:03:26.232 --rc genhtml_function_coverage=1 00:03:26.232 --rc genhtml_legend=1 00:03:26.232 --rc geninfo_all_blocks=1 00:03:26.232 ' 00:03:26.232 09:12:13 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:26.232 --rc lcov_branch_coverage=1 00:03:26.232 --rc lcov_function_coverage=1 00:03:26.232 --rc genhtml_branch_coverage=1 00:03:26.232 --rc genhtml_function_coverage=1 00:03:26.232 --rc genhtml_legend=1 00:03:26.232 --rc geninfo_all_blocks=1 00:03:26.232 ' 00:03:26.232 09:12:13 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:26.232 --rc lcov_branch_coverage=1 00:03:26.232 --rc lcov_function_coverage=1 00:03:26.232 --rc genhtml_branch_coverage=1 00:03:26.232 --rc genhtml_function_coverage=1 00:03:26.232 --rc genhtml_legend=1 00:03:26.232 --rc geninfo_all_blocks=1 00:03:26.232 --no-external' 00:03:26.232 09:12:13 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:26.232 --rc lcov_branch_coverage=1 00:03:26.232 --rc lcov_function_coverage=1 00:03:26.232 --rc genhtml_branch_coverage=1 00:03:26.232 --rc genhtml_function_coverage=1 00:03:26.232 --rc genhtml_legend=1 00:03:26.232 --rc geninfo_all_blocks=1 00:03:26.232 --no-external' 00:03:26.232 09:12:13 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:26.232 lcov: LCOV version 1.14 00:03:26.232 09:12:13 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:38.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:38.489 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:50.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:50.729 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:50.729 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:50.729 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:50.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:50.729 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:50.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:50.729 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:50.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:50.729 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:50.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:50.729 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:50.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:50.729 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:50.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:50.729 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:50.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:50.729 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:50.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 
00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:50.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:50.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:50.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:50.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:50.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:50.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:50.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:50.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:50.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:50.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:50.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:50.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:50.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:50.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:50.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:50.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:50.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:50.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:50.991 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:50.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:50.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:50.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:50.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:50.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:50.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:50.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:50.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:50.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:50.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:50.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:50.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:50.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:50.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:50.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:50.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:50.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:50.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:50.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:50.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:50.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:50.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:50.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:50.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:50.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:50.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:50.991 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:50.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:50.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:50.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:50.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:50.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:50.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:50.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:50.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:50.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:50.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:50.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:50.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:50.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:50.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:51.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:51.250 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:51.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:51.250 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:51.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:51.250 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:51.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:51.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:51.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:51.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:51.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:51.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:51.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:51.251 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:51.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:51.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:51.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:51.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:51.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:51.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:51.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:51.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:55.450 09:12:42 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:55.450 09:12:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:55.450 09:12:42 -- common/autotest_common.sh@10 -- # set +x 00:03:55.450 09:12:42 -- spdk/autotest.sh@91 -- # rm -f 00:03:55.450 09:12:42 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.810 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:58.810 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:58.810 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:59.071 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:59.071 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:59.071 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:59.071 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:59.071 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:59.071 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:59.071 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:59.071 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:59.071 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:59.071 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:59.071 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:59.071 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:59.331 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:59.331 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:59.331 09:12:46 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:59.331 09:12:46 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:59.331 09:12:46 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:59.331 09:12:46 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:59.331 09:12:46 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:59.331 09:12:46 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:59.331 09:12:46 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:59.331 09:12:46 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:59.331 09:12:46 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:59.331 09:12:46 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:59.331 09:12:46 
-- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:59.331 09:12:46 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:59.331 09:12:46 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:59.331 09:12:46 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:59.331 09:12:46 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:59.331 No valid GPT data, bailing 00:03:59.331 09:12:46 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:59.331 09:12:46 -- scripts/common.sh@391 -- # pt= 00:03:59.331 09:12:46 -- scripts/common.sh@392 -- # return 1 00:03:59.331 09:12:46 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:59.331 1+0 records in 00:03:59.331 1+0 records out 00:03:59.331 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00652912 s, 161 MB/s 00:03:59.331 09:12:46 -- spdk/autotest.sh@118 -- # sync 00:03:59.331 09:12:46 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:59.331 09:12:46 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:59.331 09:12:46 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:07.469 09:12:54 -- spdk/autotest.sh@124 -- # uname -s 00:04:07.469 09:12:54 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:07.469 09:12:54 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:07.469 09:12:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.469 09:12:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.469 09:12:54 -- common/autotest_common.sh@10 -- # set +x 00:04:07.469 ************************************ 00:04:07.469 START TEST setup.sh 00:04:07.469 ************************************ 00:04:07.469 09:12:54 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:07.469 * Looking for test storage... 00:04:07.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:07.469 09:12:54 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:07.469 09:12:54 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:07.469 09:12:54 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:07.469 09:12:54 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.469 09:12:54 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.469 09:12:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:07.469 ************************************ 00:04:07.469 START TEST acl 00:04:07.469 ************************************ 00:04:07.470 09:12:54 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:07.470 * Looking for test storage... 
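(Note on the pre_cleanup step traced above: it iterates every non-partition NVMe namespace, skips zoned devices, checks for partition-table data, and zeroes the first MiB before the setup tests start. Below is a minimal sketch of that wipe loop; it is grounded in the commands shown in the log, but uses plain blkid for the partition check instead of SPDK's spdk-gpt.py helper.)

    #!/usr/bin/env bash
    # Sketch of the pre-test NVMe cleanup: skip zoned namespaces, wipe only devices
    # without a partition table. Destructive by design -- it writes to raw block devices.
    shopt -s nullglob extglob
    for dev in /dev/nvme*n!(*p*); do
        name=$(basename "$dev")
        # zoned namespaces are left alone ("none" means not zoned)
        if [[ -e /sys/block/$name/queue/zoned && $(cat "/sys/block/$name/queue/zoned") != none ]]; then
            continue
        fi
        # the real test also consults scripts/spdk-gpt.py; here only blkid is used
        if [[ -z $(blkid -s PTTYPE -o value "$dev" 2>/dev/null) ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1   # clear the first MiB, as in the log
        fi
    done
    sync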
00:04:07.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:07.470 09:12:54 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:07.470 09:12:54 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:07.470 09:12:54 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:07.470 09:12:54 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:07.470 09:12:54 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:07.470 09:12:54 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:07.470 09:12:54 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:07.470 09:12:54 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:07.470 09:12:54 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:07.470 09:12:54 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:07.470 09:12:54 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:07.470 09:12:54 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:07.470 09:12:54 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:07.470 09:12:54 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:07.470 09:12:54 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.470 09:12:54 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:11.670 09:12:58 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:11.670 09:12:58 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:11.670 09:12:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.670 09:12:58 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:11.670 09:12:58 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.670 09:12:58 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:14.963 Hugepages 00:04:14.963 node hugesize free / total 00:04:14.963 09:13:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:14.963 09:13:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:14.963 09:13:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.963 09:13:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:14.963 09:13:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:14.963 09:13:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.963 09:13:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:14.963 09:13:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:14.963 09:13:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.963 00:04:14.963 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:14.963 09:13:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:14.963 09:13:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:14.963 09:13:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.963 09:13:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.964 09:13:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.964 09:13:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:04:14.964 09:13:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.964 09:13:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.964 09:13:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.964 09:13:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:04:14.964 09:13:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.964 09:13:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.964 09:13:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.964 09:13:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:04:14.964 09:13:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.964 09:13:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.964 09:13:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.964 09:13:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:04:14.964 09:13:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.964 09:13:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.964 09:13:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.964 09:13:02 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:14.964 09:13:02 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:14.964 09:13:02 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.964 09:13:02 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.964 09:13:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:14.964 ************************************ 00:04:14.964 START TEST denied 00:04:14.964 ************************************ 00:04:14.964 09:13:02 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:14.964 09:13:02 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:04:14.964 09:13:02 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:14.964 09:13:02 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:04:14.964 09:13:02 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.964 09:13:02 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:19.164 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:04:19.164 09:13:05 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:04:19.164 09:13:05 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:19.164 09:13:05 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:19.164 09:13:05 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:04:19.164 09:13:05 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:04:19.164 09:13:05 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:19.164 09:13:05 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:19.164 09:13:05 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:19.164 09:13:05 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:19.164 09:13:05 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:24.441 00:04:24.441 real 0m8.880s 00:04:24.441 user 0m3.005s 00:04:24.441 sys 0m5.221s 00:04:24.441 09:13:10 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.441 09:13:10 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:24.441 ************************************ 00:04:24.441 END TEST denied 00:04:24.441 ************************************ 00:04:24.441 09:13:10 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:24.441 09:13:10 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:24.441 09:13:10 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.441 09:13:10 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.441 09:13:10 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:24.441 ************************************ 00:04:24.441 START TEST allowed 00:04:24.441 ************************************ 00:04:24.441 09:13:11 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:24.441 09:13:11 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:04:24.441 09:13:11 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:24.441 09:13:11 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.441 09:13:11 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:04:24.441 09:13:11 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:29.720 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:29.720 09:13:16 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:29.720 09:13:16 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:29.720 09:13:16 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:29.720 09:13:16 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.720 09:13:16 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:33.925 00:04:33.925 real 0m9.952s 00:04:33.925 user 0m2.977s 00:04:33.925 sys 0m5.269s 00:04:33.925 09:13:20 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.925 09:13:20 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:33.925 ************************************ 00:04:33.925 END TEST allowed 00:04:33.925 ************************************ 00:04:33.925 09:13:21 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:33.925 00:04:33.925 real 0m26.753s 00:04:33.925 user 0m8.835s 00:04:33.925 sys 0m15.659s 00:04:33.925 09:13:21 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.925 09:13:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:33.925 ************************************ 00:04:33.925 END TEST acl 00:04:33.925 ************************************ 00:04:33.925 09:13:21 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:33.926 09:13:21 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:33.926 09:13:21 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.926 09:13:21 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.926 09:13:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:33.926 ************************************ 00:04:33.926 START TEST hugepages 00:04:33.926 ************************************ 00:04:33.926 09:13:21 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:34.188 * Looking for test storage... 00:04:34.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 105992676 kB' 'MemAvailable: 110585384 kB' 'Buffers: 9096 kB' 'Cached: 11260764 kB' 'SwapCached: 0 kB' 'Active: 7145704 kB' 'Inactive: 4667068 kB' 'Active(anon): 6754696 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546292 kB' 'Mapped: 183732 kB' 'Shmem: 6211784 kB' 'KReclaimable: 575576 kB' 'Slab: 1353664 kB' 'SReclaimable: 575576 kB' 'SUnreclaim: 778088 kB' 'KernelStack: 27280 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460880 kB' 'Committed_AS: 8318516 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237372 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.189 09:13:21 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # 
echo 0 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:34.189 09:13:21 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:34.189 09:13:21 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.189 09:13:21 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.189 09:13:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:34.189 ************************************ 00:04:34.189 START TEST default_setup 00:04:34.189 ************************************ 00:04:34.189 09:13:21 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:34.189 09:13:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:34.189 09:13:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:34.189 09:13:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:34.189 09:13:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:34.189 09:13:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:34.189 09:13:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:34.189 09:13:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:34.189 09:13:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:34.189 09:13:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:34.189 09:13:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:34.189 09:13:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:34.189 09:13:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:34.189 09:13:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:34.189 09:13:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:34.189 09:13:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:34.189 09:13:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:34.189 09:13:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:34.189 09:13:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:34.189 09:13:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:34.189 09:13:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:34.189 09:13:21 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.189 09:13:21 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:38.428 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:38.428 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:38.428 0000:80:01.4 (8086 0b00): ioatdma -> 
vfio-pci 00:04:38.428 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:38.428 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:38.428 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:38.428 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:38.428 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:38.428 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:38.428 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:38.428 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:38.428 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:38.428 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:38.428 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:38.428 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:38.428 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:38.428 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:38.428 09:13:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:38.428 09:13:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:38.428 09:13:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:38.428 09:13:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:38.428 09:13:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:38.428 09:13:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:38.428 09:13:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:38.428 09:13:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:38.428 09:13:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:38.428 09:13:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:38.428 09:13:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:38.428 09:13:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:38.428 09:13:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:38.428 09:13:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.428 09:13:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.428 09:13:24 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.428 09:13:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.428 09:13:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.428 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.428 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.428 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108154716 kB' 'MemAvailable: 112747416 kB' 'Buffers: 9096 kB' 'Cached: 11260880 kB' 'SwapCached: 0 kB' 'Active: 7163232 kB' 'Inactive: 4667068 kB' 'Active(anon): 6772224 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563296 kB' 'Mapped: 183168 kB' 'Shmem: 6211900 kB' 'KReclaimable: 575568 kB' 'Slab: 1351220 kB' 'SReclaimable: 575568 kB' 
'SUnreclaim: 775652 kB' 'KernelStack: 27280 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8304592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237292 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:04:38.428 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.428 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.428 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.428 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.428 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.428 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.428 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.428 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.428 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.428 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.428 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.428 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.428 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.428 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.428 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.429 09:13:25 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.429 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108158424 kB' 'MemAvailable: 112751124 kB' 'Buffers: 9096 kB' 'Cached: 11260884 kB' 'SwapCached: 0 kB' 'Active: 7163280 kB' 'Inactive: 4667068 kB' 'Active(anon): 6772272 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563380 kB' 'Mapped: 183168 kB' 'Shmem: 6211904 kB' 'KReclaimable: 575568 kB' 'Slab: 1351172 kB' 'SReclaimable: 575568 kB' 'SUnreclaim: 775604 kB' 'KernelStack: 27200 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8304612 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237228 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.430 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.431 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108159608 kB' 'MemAvailable: 112752308 kB' 'Buffers: 9096 kB' 'Cached: 11260900 kB' 'SwapCached: 0 kB' 'Active: 7162116 kB' 'Inactive: 4667068 kB' 'Active(anon): 6771108 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562572 kB' 'Mapped: 183072 kB' 'Shmem: 6211920 kB' 'KReclaimable: 575568 kB' 'Slab: 1351204 kB' 'SReclaimable: 575568 kB' 'SUnreclaim: 775636 kB' 'KernelStack: 27216 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8304764 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237228 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.432 
09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.432 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.433 09:13:25 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.433 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.434 09:13:25 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:38.434 nr_hugepages=1024 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:38.434 resv_hugepages=0 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:38.434 surplus_hugepages=0 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:38.434 anon_hugepages=0 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.434 
09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108159852 kB' 'MemAvailable: 112752552 kB' 'Buffers: 9096 kB' 'Cached: 11260924 kB' 'SwapCached: 0 kB' 'Active: 7162184 kB' 'Inactive: 4667068 kB' 'Active(anon): 6771176 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562608 kB' 'Mapped: 183072 kB' 'Shmem: 6211944 kB' 'KReclaimable: 575568 kB' 'Slab: 1351204 kB' 'SReclaimable: 575568 kB' 'SUnreclaim: 775636 kB' 'KernelStack: 27232 kB' 'PageTables: 8440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8304792 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237228 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.434 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.435 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58095456 kB' 'MemUsed: 7563552 kB' 'SwapCached: 0 kB' 'Active: 2254292 kB' 'Inactive: 1034540 kB' 'Active(anon): 2040956 kB' 'Inactive(anon): 0 kB' 'Active(file): 213336 kB' 'Inactive(file): 1034540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2846964 kB' 'Mapped: 75584 kB' 'AnonPages: 445144 kB' 'Shmem: 1599088 kB' 'KernelStack: 14664 kB' 'PageTables: 5636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 195988 kB' 'Slab: 579264 kB' 
'SReclaimable: 195988 kB' 'SUnreclaim: 383276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.436 09:13:25 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.436 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
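[Annotation] The entries that follow close out default_setup (the scan finally reaches HugePages_Surp, echoes 0, the per-node totals are folded into nodes_test, and the test prints "node0=1024 expecting 1024" before its END TEST banner) and then open per_node_1G_alloc: get_test_nr_hugepages is asked for 1048576 kB across nodes 0 and 1, which at the 2048 kB default hugepage size comes to 512 pages on each node, exported as NRHUGE=512 HUGENODE=0,1 before scripts/setup.sh is re-run. A minimal sketch of that arithmetic, reconstructed from the hugepages.sh trace below (variable names follow the trace; the real script may differ):

    # size -> per-node page count, as in the get_test_nr_hugepages trace
    size=1048576                                   # requested kB of hugepages (1 GiB per node)
    default_hugepages=2048                         # Hugepagesize in kB (2 MiB pages)
    user_nodes=(0 1)                               # HUGENODE=0,1
    nr_hugepages=$(( size / default_hugepages ))   # 1048576 / 2048 = 512
    nodes_test=()
    for node in "${user_nodes[@]}"; do
        nodes_test[node]=$nr_hugepages             # 512 pages on node0 and on node1
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]} total=$(( nodes_test[0] + nodes_test[1] ))"
    # The allocation itself is done by re-running scripts/setup.sh with
    # NRHUGE=512 HUGENODE=0,1, as shown at hugepages.sh@146 below.

Running the sketch prints "node0=512 node1=512 total=1024"; the 1024-page total is what the subsequent verify_nr_hugepages pass in this log checks against.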
00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:38.437 node0=1024 expecting 1024 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:38.437 00:04:38.437 real 0m3.883s 00:04:38.437 user 0m1.493s 00:04:38.437 sys 0m2.328s 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.437 09:13:25 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:38.437 ************************************ 00:04:38.437 END TEST default_setup 00:04:38.437 ************************************ 00:04:38.437 09:13:25 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:38.437 09:13:25 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:38.437 09:13:25 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.437 09:13:25 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.437 09:13:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:38.437 ************************************ 00:04:38.437 START TEST per_node_1G_alloc 00:04:38.437 ************************************ 00:04:38.437 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:38.437 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:38.437 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:38.437 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:38.437 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:38.437 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:38.437 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:38.437 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:38.437 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:38.437 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:38.438 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:38.438 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:38.438 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:38.438 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:38.438 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:38.438 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:38.438 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:38.438 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:38.438 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:38.438 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:38.438 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:38.438 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:38.438 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:38.438 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:38.438 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:38.438 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:38.438 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.438 09:13:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:42.645 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:42.645 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:42.645 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:42.645 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:42.645 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:42.645 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:42.645 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:42.645 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:42.645 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:42.645 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:42.645 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:42.645 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:42.645 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:42.645 
0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:42.645 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:42.645 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:42.645 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108169848 kB' 'MemAvailable: 112762548 kB' 'Buffers: 9096 kB' 'Cached: 11261056 kB' 'SwapCached: 0 kB' 'Active: 7160884 kB' 'Inactive: 4667068 kB' 'Active(anon): 6769876 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561180 kB' 'Mapped: 182124 kB' 'Shmem: 6212076 kB' 'KReclaimable: 575568 kB' 'Slab: 1350504 kB' 'SReclaimable: 575568 kB' 'SUnreclaim: 774936 kB' 'KernelStack: 27328 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8294540 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237420 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.645 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.646 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 
0 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108168680 kB' 'MemAvailable: 112761380 kB' 'Buffers: 9096 kB' 'Cached: 11261060 kB' 'SwapCached: 0 kB' 'Active: 7161388 kB' 'Inactive: 4667068 kB' 'Active(anon): 6770380 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561668 kB' 'Mapped: 182124 kB' 'Shmem: 6212080 kB' 'KReclaimable: 575568 kB' 'Slab: 1350496 kB' 'SReclaimable: 575568 kB' 'SUnreclaim: 774928 kB' 'KernelStack: 27424 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8294560 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237532 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.647 09:13:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.647 09:13:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.647 
09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.647 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.648 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108167332 kB' 'MemAvailable: 112760032 kB' 'Buffers: 9096 kB' 'Cached: 11261076 kB' 'SwapCached: 0 kB' 'Active: 7161328 kB' 'Inactive: 4667068 kB' 'Active(anon): 6770320 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561548 kB' 'Mapped: 182100 kB' 'Shmem: 6212096 kB' 'KReclaimable: 575568 kB' 'Slab: 1350556 kB' 'SReclaimable: 575568 kB' 'SUnreclaim: 774988 kB' 'KernelStack: 27392 kB' 'PageTables: 8812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8294580 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237532 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.649 
09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.649 09:13:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.649 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.650 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:42.651 09:13:29 
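The trace above repeats one pattern for every /proc/meminfo field: get_meminfo walks the file with IFS=': ', skips (continue) each key that is not the one requested, then echoes the matching value and returns 0, which is how both surp and resv came back as 0 here. Below is a minimal sketch of that helper, reconstructed only from the setup/common.sh@16-33 lines visible in this trace; anything the trace does not show (for example the fall-through return code) is an assumption, not the script's actual source.

    shopt -s extglob   # required for the +([0-9]) prefix strip below

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # A per-node query reads that node's own meminfo instead of the global one.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node N " prefix; strip it so keys match /proc/meminfo.
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan key by key: skip non-matching keys, echo the value of the requested one.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    surp=$(get_meminfo HugePages_Surp)         # system-wide query, as traced above
    node0_surp=$(get_meminfo HugePages_Surp 0) # per-node form used later in this log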
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:42.651 nr_hugepages=1024 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:42.651 resv_hugepages=0 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:42.651 surplus_hugepages=0 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:42.651 anon_hugepages=0 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108170172 kB' 'MemAvailable: 112762872 kB' 'Buffers: 9096 kB' 'Cached: 11261096 kB' 'SwapCached: 0 kB' 'Active: 7160912 kB' 'Inactive: 4667068 kB' 'Active(anon): 6769904 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561052 kB' 'Mapped: 182100 kB' 'Shmem: 6212116 kB' 'KReclaimable: 575568 kB' 'Slab: 1350524 kB' 'SReclaimable: 575568 kB' 'SUnreclaim: 774956 kB' 'KernelStack: 27408 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8294604 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237532 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 
116391936 kB' 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.651 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.652 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:42.653 09:13:29 
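With surp=0 and resv=0 in hand, the caller in setup/hugepages.sh confirms that the kernel-reported HugePages_Total (1024) equals nr_hugepages + surp + resv, then enumerates the NUMA nodes before repeating the query per node. The rough sketch below is pieced together from the hugepages.sh@99-117 and @27-33 lines in the trace and reuses get_meminfo from the sketch above; array names, error handling, and anything else the trace does not show are assumptions, and the literal 512 is simply the per-node value xtrace shows for this two-node machine.

    shopt -s extglob                     # the node+([0-9]) glob below needs extglob
    nr_hugepages=1024                    # pool size requested by this test

    surp=$(get_meminfo HugePages_Surp)   # -> 0 in the run above
    resv=$(get_meminfo HugePages_Rsvd)   # -> 0 in the run above
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$(get_meminfo AnonHugePages)"   # traced as anon_hugepages=0

    # Global accounting: the pool the kernel reports must equal what was requested.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))

    # get_nodes: record one entry per NUMA node; xtrace shows 512 pages per node here.
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512
    done
    no_nodes=${#nodes_sys[@]}            # -> 2
    (( no_nodes > 0 ))

    # Per-node accounting: repeat the HugePages_Surp query against each node's own
    # meminfo (the node0 pass is what the log continues with below).
    for node in "${!nodes_sys[@]}"; do
        get_meminfo HugePages_Surp "$node"   # -> 0 for node0
    done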
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59152024 kB' 'MemUsed: 6506984 kB' 'SwapCached: 0 kB' 'Active: 2252504 kB' 'Inactive: 1034540 kB' 'Active(anon): 2039168 kB' 'Inactive(anon): 0 kB' 'Active(file): 213336 kB' 'Inactive(file): 1034540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2847044 kB' 'Mapped: 74972 kB' 'AnonPages: 443152 kB' 'Shmem: 1599168 kB' 'KernelStack: 14664 kB' 'PageTables: 5596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 195988 kB' 'Slab: 579004 kB' 'SReclaimable: 195988 kB' 'SUnreclaim: 383016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.653 09:13:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.653 09:13:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.653 09:13:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.653 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.654 
09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.654 09:13:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.654 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679852 kB' 'MemFree: 49019944 kB' 'MemUsed: 11659908 kB' 'SwapCached: 0 kB' 'Active: 4908720 kB' 'Inactive: 3632528 kB' 'Active(anon): 4731048 kB' 'Inactive(anon): 0 kB' 'Active(file): 177672 kB' 'Inactive(file): 3632528 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8423176 kB' 'Mapped: 107128 kB' 'AnonPages: 118200 kB' 'Shmem: 4612976 kB' 
'KernelStack: 12712 kB' 'PageTables: 3504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 379580 kB' 'Slab: 771520 kB' 'SReclaimable: 379580 kB' 'SUnreclaim: 391940 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.655 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.656 09:13:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:42.656 node0=512 expecting 512 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:42.656 node1=512 expecting 512 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:42.656 00:04:42.656 real 0m4.116s 00:04:42.656 user 0m1.653s 00:04:42.656 sys 0m2.529s 00:04:42.656 09:13:29 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.656 09:13:29 
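The long field-by-field scan traced above is setup/common.sh's get_meminfo helper walking /sys/devices/system/node/node1/meminfo one line at a time until it reaches HugePages_Surp, then echoing that value (0 here) back to hugepages.sh, so nodes_test[1] gains no surplus pages before the 512-vs-512 comparison. A minimal sketch of that lookup, assuming only the sysfs layout and field names visible in the trace; the function body and variable names below are illustrative, not the exact setup/common.sh implementation (which, as the trace shows, uses mapfile plus an extglob prefix strip):

#!/usr/bin/env bash
# Simplified per-node meminfo lookup, mirroring the loop traced above.
# Assumption: /proc/meminfo and /sys/devices/system/node/node<N>/meminfo
# layouts as shown in the trace; this is an illustration only.

get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # Per-node statistics live under sysfs when a node number is given.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local var val rest
    while IFS=': ' read -r var val rest; do
        if [[ $var == Node ]]; then
            # Per-node lines read "Node <n> <Field>: <value> kB"; re-split the rest.
            IFS=': ' read -r var val rest <<<"$rest"
        fi
        if [[ $var == "$get" ]]; then
            echo "$val"   # kB for memory fields, a bare count for HugePages_*
            return 0
        fi
    done < "$mem_f"
    return 1
}

# Example: surplus huge pages on NUMA node 1, as queried in the test above.
get_meminfo HugePages_Surp 1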
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:42.656 ************************************ 00:04:42.656 END TEST per_node_1G_alloc 00:04:42.656 ************************************ 00:04:42.656 09:13:29 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:42.656 09:13:29 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:42.656 09:13:29 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.656 09:13:29 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.656 09:13:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:42.656 ************************************ 00:04:42.656 START TEST even_2G_alloc 00:04:42.656 ************************************ 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:42.656 09:13:29 
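Before even_2G_alloc touches the system, hugepages.sh (traced above) converts the requested 2097152 kB into a page count using the 2048 kB default huge page size, giving nr_hugepages=1024, and spreads that evenly over the two NUMA nodes, which is where the later node0=512 / node1=512 expectations come from; setup.sh is then invoked with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes to perform the actual allocation. A minimal sketch of that even split, assuming a plain divide-with-remainder policy for illustration; apart from nr_hugepages and nodes_test, the variable names below are not from the script:

#!/usr/bin/env bash
# Even 2G huge page split sketched from the values traced above:
# 2097152 kB requested, 2048 kB default huge page size, 2 NUMA nodes.
# Assumption: a simple divide-evenly policy; the real hugepages.sh walks
# the node list as shown in the trace rather than computing it this way.

size_kb=2097152
hugepagesize_kb=2048
nr_nodes=2

nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 1024 pages total

declare -a nodes_test
per_node=$(( nr_hugepages / nr_nodes ))
remainder=$(( nr_hugepages % nr_nodes ))
for (( node = 0; node < nr_nodes; node++ )); do
    nodes_test[node]=$per_node
done
if (( remainder > 0 )); then
    # Park any leftover pages on node 0 in this sketch.
    (( nodes_test[0] += remainder ))
fi

echo "nr_hugepages=${nr_hugepages}"             # nr_hugepages=1024
for (( node = 0; node < nr_nodes; node++ )); do
    echo "node${node}=${nodes_test[node]} expecting ${nodes_test[node]}"
done
# -> node0=512 expecting 512
# -> node1=512 expecting 512

The verify_nr_hugepages pass that follows in the trace then re-reads /proc/meminfo and each node's meminfo to confirm the counts actually landed as expected after setup.sh ran with those environment variables.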
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.656 09:13:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:46.863 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:46.863 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:46.863 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:46.863 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:46.863 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:46.863 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:46.863 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:46.863 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:46.863 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:46.863 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:46.863 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:46.863 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:46.863 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:46.863 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:46.863 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:46.863 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:46.863 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.863 09:13:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108201592 kB' 'MemAvailable: 112794252 kB' 'Buffers: 9096 kB' 'Cached: 11261240 kB' 'SwapCached: 0 kB' 'Active: 7162140 kB' 'Inactive: 4667068 kB' 'Active(anon): 6771132 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561612 kB' 'Mapped: 182252 kB' 'Shmem: 6212260 kB' 'KReclaimable: 575528 kB' 'Slab: 1349988 kB' 'SReclaimable: 575528 kB' 'SUnreclaim: 774460 kB' 'KernelStack: 27216 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8292648 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237404 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.863 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.864 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108204100 kB' 'MemAvailable: 112796760 kB' 'Buffers: 9096 kB' 'Cached: 11261244 kB' 'SwapCached: 0 kB' 'Active: 7162024 kB' 'Inactive: 4667068 kB' 'Active(anon): 6771016 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561460 kB' 'Mapped: 182192 kB' 'Shmem: 6212264 kB' 'KReclaimable: 575528 kB' 'Slab: 1349988 kB' 'SReclaimable: 575528 kB' 'SUnreclaim: 774460 kB' 'KernelStack: 27200 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8292668 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237388 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.865 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.866 09:13:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108203944 kB' 'MemAvailable: 112796604 kB' 'Buffers: 9096 kB' 'Cached: 11261244 kB' 'SwapCached: 0 kB' 'Active: 7162024 kB' 'Inactive: 4667068 kB' 'Active(anon): 6771016 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561460 kB' 'Mapped: 182192 kB' 'Shmem: 6212264 kB' 'KReclaimable: 575528 kB' 'Slab: 1349988 kB' 'SReclaimable: 575528 kB' 'SUnreclaim: 774460 kB' 'KernelStack: 27200 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8292320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237404 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
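[editorial note] The trace above (and the identical scans below for HugePages_Rsvd and HugePages_Total) is setup/common.sh's get_meminfo walking /proc/meminfo one key at a time: every non-matching field takes the "continue" branch, and the matching field's value is echoed back (0 for HugePages_Surp on this box). A minimal standalone sketch of that lookup, with an illustrative function name and simplified argument handling (an assumption for clarity, not the project's actual helper):

  get_meminfo_sketch() {
      local get=$1 node=${2:-}               # key to look up, optional NUMA node number
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line var val _
      while read -r line; do
          line=${line#"Node $node "}          # per-node files prefix every line with "Node N "
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then       # found the requested key -> print its value
              echo "${val:-0}"
              return 0
          fi
      done < "$mem_f"
      echo 0                                  # key absent -> report 0, as the trace does
  }
  # get_meminfo_sketch HugePages_Surp   -> 0 on the machine above
  # get_meminfo_sketch MemTotal         -> 126338860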
00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.867 
09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 09:13:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:46.868 nr_hugepages=1024 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:46.868 resv_hugepages=0 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:46.868 surplus_hugepages=0 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:46.868 anon_hugepages=0 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.868 09:13:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108201868 kB' 'MemAvailable: 112794528 kB' 'Buffers: 9096 kB' 'Cached: 11261284 kB' 'SwapCached: 0 kB' 'Active: 7161944 kB' 'Inactive: 4667068 kB' 'Active(anon): 6770936 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561940 kB' 'Mapped: 182680 kB' 'Shmem: 6212304 kB' 'KReclaimable: 575528 kB' 'Slab: 1349996 kB' 'SReclaimable: 575528 kB' 'SUnreclaim: 774468 kB' 'KernelStack: 27152 kB' 'PageTables: 8212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8293568 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237324 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 
09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
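[editorial note] With HugePages_Surp and HugePages_Rsvd both read back as 0, the test echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 above and is now re-reading HugePages_Total to confirm the pool really holds the requested 1024 pages. A hedged, self-contained sketch of that accounting check (the function name and flow are illustrative, not the project's hugepages.sh):

  verify_hugepage_accounting() {
      local requested=$1 total surp resv
      total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
      surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
      resv=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
      echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp"
      # On the node above this is 1024 == 1024 + 0 + 0, so both checks pass.
      (( requested == total + surp + resv )) || return 1
      (( requested == total ))
  }
  # verify_hugepage_accounting 1024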
00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59167276 kB' 'MemUsed: 6491732 kB' 'SwapCached: 0 kB' 'Active: 2252976 
kB' 'Inactive: 1034540 kB' 'Active(anon): 2039640 kB' 'Inactive(anon): 0 kB' 'Active(file): 213336 kB' 'Inactive(file): 1034540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2847168 kB' 'Mapped: 74972 kB' 'AnonPages: 443568 kB' 'Shmem: 1599292 kB' 'KernelStack: 14632 kB' 'PageTables: 5476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 195932 kB' 'Slab: 578788 kB' 'SReclaimable: 195932 kB' 'SUnreclaim: 382856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.870 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 09:13:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679852 kB' 'MemFree: 49032768 kB' 'MemUsed: 11647084 kB' 'SwapCached: 0 kB' 'Active: 4908596 kB' 'Inactive: 3632528 kB' 'Active(anon): 4730924 kB' 'Inactive(anon): 0 kB' 'Active(file): 177672 kB' 'Inactive(file): 3632528 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8423232 kB' 'Mapped: 107308 kB' 'AnonPages: 117980 kB' 'Shmem: 4613032 kB' 'KernelStack: 12552 kB' 'PageTables: 2812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 379580 kB' 'Slab: 771192 kB' 'SReclaimable: 379580 kB' 'SUnreclaim: 391612 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 
09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:46.873 node0=512 expecting 512 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:46.873 node1=512 expecting 512 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:46.873 00:04:46.873 real 0m4.075s 00:04:46.873 user 0m1.636s 00:04:46.873 sys 0m2.509s 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.873 09:13:33 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:46.873 ************************************ 00:04:46.873 END TEST even_2G_alloc 00:04:46.873 ************************************ 00:04:46.873 09:13:33 setup.sh.hugepages -- 
common/autotest_common.sh@1142 -- # return 0 00:04:46.873 09:13:33 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:46.873 09:13:33 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.873 09:13:33 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.873 09:13:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:46.873 ************************************ 00:04:46.873 START TEST odd_alloc 00:04:46.873 ************************************ 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.873 09:13:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:50.172 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:50.172 
0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:50.172 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:50.172 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:50.172 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:50.172 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:50.172 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:50.172 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:50.172 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:50.172 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:50.172 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:50.172 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:50.172 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:50.172 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:50.172 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:50.172 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:50.172 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108206848 kB' 'MemAvailable: 112799492 kB' 'Buffers: 9096 kB' 'Cached: 11261432 kB' 'SwapCached: 0 kB' 'Active: 7162984 kB' 'Inactive: 4667068 kB' 'Active(anon): 6771976 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562360 kB' 'Mapped: 182304 kB' 'Shmem: 6212452 kB' 'KReclaimable: 575512 kB' 'Slab: 1349596 kB' 'SReclaimable: 575512 kB' 'SUnreclaim: 774084 kB' 'KernelStack: 27232 kB' 'PageTables: 8144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508432 kB' 'Committed_AS: 8295084 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237324 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 
09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 
09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.439 09:13:37 
setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108206236 kB' 'MemAvailable: 112798880 kB' 'Buffers: 9096 kB' 'Cached: 11261436 kB' 'SwapCached: 0 kB' 'Active: 7164424 kB' 'Inactive: 4667068 kB' 'Active(anon): 6773416 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563920 kB' 'Mapped: 182252 kB' 'Shmem: 6212456 kB' 'KReclaimable: 575512 kB' 'Slab: 1349628 kB' 'SReclaimable: 575512 kB' 'SUnreclaim: 774116 kB' 'KernelStack: 27280 kB' 'PageTables: 8460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508432 kB' 'Committed_AS: 8295104 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237404 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.439 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
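For orientation, a minimal sketch of what the get_meminfo helper being traced here appears to do, reconstructed from the xtrace alone (the mem_f selection, the mapfile call, the "Node N " prefix strip, and the IFS=': ' read loop); this is a hedged reconstruction, not the verbatim setup/common.sh source:

#!/usr/bin/env bash
# Sketch of the meminfo lookup exercised repeatedly in this trace.
# Assumption: behaviour inferred from the xtrace, not copied from SPDK.
shopt -s extglob

get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        local -a mem

        # Per-node figures come from sysfs when a node number is passed; with
        # no node (as in this trace, where $node is empty) the sysfs path does
        # not exist, so /proc/meminfo is used instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")

        # Split each "Key: value kB" line and echo the value for the
        # requested key; unmatched keys are skipped, as in the trace.
        while IFS=': ' read -r var val _; do
                [[ $var == "$get" ]] || continue
                echo "$val" # numeric value without the trailing "kB"
                return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
}

# Example: get_meminfo HugePages_Free   # -> 1025 for the snapshot above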
00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 
09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108208852 kB' 'MemAvailable: 112801496 kB' 'Buffers: 9096 kB' 'Cached: 11261452 kB' 'SwapCached: 0 kB' 'Active: 7162940 kB' 'Inactive: 4667068 kB' 'Active(anon): 6771932 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562732 kB' 'Mapped: 182144 kB' 'Shmem: 6212472 kB' 'KReclaimable: 575512 kB' 'Slab: 1349632 kB' 'SReclaimable: 575512 kB' 'SUnreclaim: 774120 kB' 'KernelStack: 27280 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508432 kB' 'Committed_AS: 8296836 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237340 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.440 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
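A hedged sketch of the accounting step this trace is building up to (hugepages.sh@97-110, visible just below): the odd_alloc test reads the anonymous, surplus and reserved hugepage counters and asserts that the odd page count it requested is reflected exactly. The literal 1025 is already expanded in the xtrace, so treating it as the requested count is an assumption; anon/surp/resv/nr_hugepages are the names the trace itself uses.

nr_hugepages=1025                     # odd allocation under test (assumed origin of the 1025)

anon=$(get_meminfo AnonHugePages)     # -> 0 in this trace
surp=$(get_meminfo HugePages_Surp)    # -> 0
resv=$(get_meminfo HugePages_Rsvd)    # -> 0

echo "nr_hugepages=$nr_hugepages" "resv_hugepages=$resv" \
        "surplus_hugepages=$surp" "anon_hugepages=$anon"

# The two arithmetic checks traced at hugepages.sh@107 and @109:
(( 1025 == nr_hugepages + surp + resv ))
(( 1025 == nr_hugepages ))

# The trace then queries HugePages_Total (1025) for the follow-up check.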
00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:50.441 nr_hugepages=1025 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:50.441 resv_hugepages=0 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:50.441 surplus_hugepages=0 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:50.441 anon_hugepages=0 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages 
)) 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.441 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108208632 kB' 'MemAvailable: 112801276 kB' 'Buffers: 9096 kB' 'Cached: 11261472 kB' 'SwapCached: 0 kB' 'Active: 7163320 kB' 'Inactive: 4667068 kB' 'Active(anon): 6772312 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563072 kB' 'Mapped: 182136 kB' 'Shmem: 6212492 kB' 'KReclaimable: 575512 kB' 'Slab: 1349632 kB' 'SReclaimable: 575512 kB' 'SUnreclaim: 774120 kB' 'KernelStack: 27328 kB' 'PageTables: 8588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508432 kB' 'Committed_AS: 8296856 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237452 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 
09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
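[Editor's note: the trace above is setup/common.sh's get_meminfo walking every /proc/meminfo field and discarding it with `continue` until it reaches the requested key (HugePages_Total, reported as 1025 in this run); the same helper takes an optional node argument and then reads /sys/devices/system/node/nodeN/meminfo instead, stripping the "Node N " prefix so the keys line up. The following is a minimal, self-contained sketch of that lookup pattern, not the literal setup/common.sh code; the helper name get_meminfo_value is illustrative.]

#!/usr/bin/env bash
# Sketch of the field lookup traced above (illustrative helper, assumptions noted inline).
shopt -s extglob    # needed for the +([0-9]) pattern that strips "Node N " prefixes

get_meminfo_value() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    # Per-node queries read that node's own meminfo; its lines carry a "Node N " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the per-node prefix so keys match /proc/meminfo
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # e.g. var=HugePages_Total val=1025
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Hypothetical invocations, with the values this run reports:
#   get_meminfo_value HugePages_Total     -> 1025
#   get_meminfo_value HugePages_Surp 0    -> 0

[End of editor's note; trace continues.]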
00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.442 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59176488 kB' 'MemUsed: 6482520 kB' 'SwapCached: 0 kB' 'Active: 2253928 kB' 'Inactive: 1034540 kB' 'Active(anon): 2040592 kB' 'Inactive(anon): 0 kB' 'Active(file): 213336 kB' 'Inactive(file): 1034540 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2847280 kB' 'Mapped: 74972 kB' 'AnonPages: 444424 kB' 'Shmem: 1599404 kB' 'KernelStack: 14648 kB' 'PageTables: 5528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 195932 kB' 'Slab: 578712 kB' 'SReclaimable: 195932 kB' 'SUnreclaim: 382780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679852 kB' 'MemFree: 49032476 kB' 'MemUsed: 11647376 kB' 'SwapCached: 0 kB' 'Active: 4909596 kB' 'Inactive: 3632528 kB' 'Active(anon): 4731924 kB' 'Inactive(anon): 0 kB' 'Active(file): 177672 kB' 'Inactive(file): 3632528 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8423288 kB' 'Mapped: 107164 kB' 'AnonPages: 118876 kB' 'Shmem: 4613088 kB' 'KernelStack: 12792 kB' 'PageTables: 3376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 379580 kB' 'Slab: 770920 kB' 'SReclaimable: 379580 kB' 'SUnreclaim: 391340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.443 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:50.444 node0=512 expecting 513 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:50.444 node1=513 expecting 512 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:50.444 00:04:50.444 real 0m4.035s 00:04:50.444 user 0m1.627s 00:04:50.444 sys 0m2.467s 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.444 09:13:37 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:50.444 ************************************ 00:04:50.444 END TEST odd_alloc 00:04:50.444 ************************************ 00:04:50.705 09:13:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:50.705 09:13:37 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:50.705 09:13:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.705 09:13:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.705 09:13:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:50.705 ************************************ 00:04:50.705 START TEST custom_alloc 00:04:50.705 ************************************ 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:50.705 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:50.706 09:13:37 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.706 09:13:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:54.918 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:54.918 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:54.918 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:54.918 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:54.918 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 
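[Editor's note: the custom_alloc trace above requests 512 hugepages on node 0 and 1024 on node 1, accumulates them into nodes_hp, joins the entries with commas into HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024', and re-runs scripts/setup.sh; the nr_hugepages=1536 line that follows the device listing is the resulting total. Below is a hedged sketch of that per-node bookkeeping; the standalone function name is illustrative and the values mirror this run.]

# Sketch of the HUGENODE assembly traced above (assumed standalone helper).
build_hugenode_spec() {
    local -a nodes_hp=([0]=512 [1]=1024)   # per-node targets from this run
    local -a hugenode=()
    local node nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        hugenode+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( nr_hugepages += nodes_hp[node] ))
    done
    local IFS=,                            # join the per-node entries with commas
    printf 'HUGENODE=%s nr_hugepages=%d\n' "${hugenode[*]}" "$nr_hugepages"
}

# build_hugenode_spec -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024 nr_hugepages=1536

[End of editor's note; the vfio-pci device listing from scripts/setup.sh continues.]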
00:04:54.918 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:54.918 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:54.918 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:54.918 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:54.918 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:54.918 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:54.918 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:54.918 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:54.918 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:54.918 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:54.918 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:54.918 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107155764 kB' 'MemAvailable: 111748392 kB' 'Buffers: 9096 kB' 'Cached: 11261608 kB' 'SwapCached: 0 kB' 'Active: 7163952 kB' 'Inactive: 4667068 kB' 'Active(anon): 6772944 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 
'AnonPages: 563532 kB' 'Mapped: 182228 kB' 'Shmem: 6212628 kB' 'KReclaimable: 575496 kB' 'Slab: 1349552 kB' 'SReclaimable: 575496 kB' 'SUnreclaim: 774056 kB' 'KernelStack: 27248 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985168 kB' 'Committed_AS: 8294560 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237436 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.918 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
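The long per-key trace above is get_meminfo from setup/common.sh scanning every field of /proc/meminfo until it reaches the requested one (AnonHugePages here, then HugePages_Surp and HugePages_Rsvd below), echoing the matched value and returning. A condensed, illustrative equivalent is sketched next; the helper name get_meminfo_sketch is hypothetical and the per-node meminfo handling of the real script is omitted.

# Minimal sketch (assumption: plain /proc/meminfo, no per-node file) of what
# the traced loop does: split each line on ': ' and print the value of the
# requested field. On the system logged above this yields 0 for both
# AnonHugePages and HugePages_Surp.
get_meminfo_sketch() {
        local get=$1
        local var val _
        while IFS=': ' read -r var val _; do
                [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        echo 0
}
# Usage mirroring the trace: anon=$(get_meminfo_sketch AnonHugePages)
#                            surp=$(get_meminfo_sketch HugePages_Surp)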
00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107155764 kB' 'MemAvailable: 111748392 kB' 'Buffers: 9096 kB' 'Cached: 11261612 kB' 'SwapCached: 0 kB' 'Active: 7163892 kB' 'Inactive: 4667068 kB' 'Active(anon): 6772884 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563508 kB' 'Mapped: 182168 kB' 'Shmem: 6212632 kB' 'KReclaimable: 575496 kB' 'Slab: 1349532 kB' 'SReclaimable: 575496 kB' 'SUnreclaim: 774036 kB' 'KernelStack: 27216 kB' 'PageTables: 8360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985168 kB' 'Committed_AS: 8294580 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237436 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.919 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107159020 kB' 'MemAvailable: 111751648 kB' 'Buffers: 9096 kB' 'Cached: 11261628 kB' 'SwapCached: 0 kB' 'Active: 7163848 kB' 'Inactive: 4667068 kB' 'Active(anon): 6772840 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563436 kB' 'Mapped: 182168 kB' 'Shmem: 6212648 kB' 'KReclaimable: 575496 kB' 'Slab: 1349624 kB' 'SReclaimable: 575496 kB' 'SUnreclaim: 774128 kB' 'KernelStack: 27200 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985168 kB' 'Committed_AS: 8294600 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237420 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.920 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
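The xtrace entries above are the setup/common.sh get_meminfo helper walking /proc/meminfo one line at a time and skipping every key that is not the one it was asked for (here HugePages_Rsvd); the strings like \H\u\g\e\P\a\g\e\s\_\R\s\v\d are just how bash xtrace prints the quoted right-hand side of the [[ == ]] comparison, and each non-matching key falls through to continue. A minimal standalone sketch of that parsing pattern, assuming illustrative function and variable names rather than the exact setup/common.sh source:

#!/usr/bin/env bash
# Sketch only (names are illustrative, not the real setup/common.sh code):
# scan /proc/meminfo line by line, split each line on ': ' into key and value,
# and print the value of the requested key, skipping everything else.
get_meminfo_value() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

get_meminfo_value HugePages_Rsvd   # prints 0 on the system traced above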
00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.921 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:54.922 nr_hugepages=1536 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:54.922 resv_hugepages=0 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:54.922 surplus_hugepages=0 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:54.922 anon_hugepages=0 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107159020 kB' 'MemAvailable: 111751648 kB' 'Buffers: 9096 kB' 'Cached: 11261652 kB' 'SwapCached: 0 kB' 'Active: 7163884 kB' 'Inactive: 4667068 kB' 'Active(anon): 6772876 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 
kB' 'AnonPages: 563444 kB' 'Mapped: 182168 kB' 'Shmem: 6212672 kB' 'KReclaimable: 575496 kB' 'Slab: 1349624 kB' 'SReclaimable: 575496 kB' 'SUnreclaim: 774128 kB' 'KernelStack: 27200 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985168 kB' 'Committed_AS: 8294620 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237436 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.922 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for 
node in /sys/devices/system/node/node+([0-9]) 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59165800 kB' 'MemUsed: 6493208 kB' 'SwapCached: 0 kB' 'Active: 2255580 kB' 'Inactive: 1034540 kB' 'Active(anon): 2042244 kB' 'Inactive(anon): 0 kB' 'Active(file): 213336 kB' 'Inactive(file): 1034540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2847424 kB' 'Mapped: 74972 kB' 'AnonPages: 445880 kB' 'Shmem: 1599548 kB' 'KernelStack: 14600 kB' 'PageTables: 5368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 195916 kB' 'Slab: 578696 kB' 'SReclaimable: 195916 kB' 'SUnreclaim: 382780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 
09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 
09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
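After the global counters are read, the trace switches to the per-node variant: get_nodes records the expected page split (512 pages on node 0, 1024 on node 1, no_nodes=2), and get_meminfo is called again with a node argument so it reads /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the script strips (the mem=("${mem[@]#Node +([0-9]) }") expansion visible above) before the same key/value parsing runs. A sketch of that per-node read, assuming illustrative names:

#!/usr/bin/env bash
# Sketch only (helper name is illustrative): read the per-node meminfo file when a
# node number is given, strip the leading "Node <N> " prefix those files carry, and
# then parse "key: value" pairs exactly like the global /proc/meminfo case.
shopt -s extglob
get_node_meminfo() {
    local want=$1 node=$2 mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node 0 " style prefix
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$want" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_node_meminfo HugePages_Surp 0   # node 0 reports 0 surplus pages in this log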
00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.923 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
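What the surrounding hugepages.sh lines are ultimately verifying is simple bookkeeping: the requested custom allocation (512 pages on node 0 plus 1024 on node 1, 1536 in total) must match the global HugePages_Total with zero surplus and zero reserved pages, and the per-node HugePages_Total values read in the entries before and after this point must add up to the same total. A sketch of that check using the values from this log (the structure is illustrative, not the hugepages.sh source):

#!/usr/bin/env bash
# Sketch only: the accounting the custom_alloc test performs, with values from this run.
nr_hugepages=1536
surp=0
resv=0
nodes_test=([0]=512 [1]=1024)   # expected per-node split

(( 1536 == nr_hugepages + surp + resv )) || echo "global hugepage count mismatch"

total=0
for node in "${!nodes_test[@]}"; do
    (( total += nodes_test[node] ))
done
(( total == nr_hugepages )) || echo "per-node hugepage counts do not add up"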
00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679852 kB' 'MemFree: 47994228 kB' 'MemUsed: 12685624 kB' 'SwapCached: 0 kB' 'Active: 4908808 kB' 'Inactive: 3632528 kB' 'Active(anon): 4731136 kB' 'Inactive(anon): 0 kB' 'Active(file): 177672 kB' 'Inactive(file): 3632528 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8423324 kB' 'Mapped: 107196 kB' 'AnonPages: 118068 kB' 'Shmem: 4613124 kB' 'KernelStack: 12600 kB' 'PageTables: 2968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 379580 kB' 'Slab: 770928 kB' 'SReclaimable: 379580 kB' 'SUnreclaim: 391348 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 
09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.924 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.925 09:13:41 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.925 09:13:41 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:54.925 node0=512 expecting 512 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:54.925 node1=1024 expecting 1024 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:54.925 00:04:54.925 real 0m4.082s 00:04:54.925 user 0m1.627s 00:04:54.925 sys 0m2.523s 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.925 09:13:41 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:54.925 ************************************ 00:04:54.925 END TEST custom_alloc 00:04:54.925 ************************************ 00:04:54.925 09:13:41 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:54.925 09:13:41 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:54.925 09:13:41 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.925 09:13:41 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.925 09:13:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:54.925 ************************************ 00:04:54.925 START TEST no_shrink_alloc 00:04:54.925 ************************************ 00:04:54.925 09:13:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:54.925 09:13:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:54.925 09:13:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:54.925 09:13:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:54.925 09:13:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:54.925 09:13:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:54.925 09:13:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:54.925 09:13:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:54.925 09:13:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:54.925 09:13:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:54.925 09:13:41 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:54.925 09:13:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:54.925 09:13:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:54.925 09:13:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:54.925 09:13:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:54.925 09:13:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:54.925 09:13:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:54.925 09:13:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:54.925 09:13:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:54.925 09:13:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:54.925 09:13:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:54.925 09:13:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.925 09:13:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:59.132 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:59.132 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:59.132 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:59.132 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:59.132 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:59.132 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:59.132 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:59.132 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:59.132 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:59.132 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:59.132 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:59.132 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:59.132 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:59.132 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:59.132 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:59.132 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:59.132 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- 
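The trace above closes out the custom_alloc case (node0=512 and node1=1024 both matched, so the combined 512,1024 check passed) and starts no_shrink_alloc, which requests 1024 default-size (2 MiB) hugepages pinned to node 0 before verify_nr_hugepages re-reads the memory counters. As a rough, hypothetical sketch only (standard kernel sysfs paths, not the SPDK scripts/setup.sh logic itself), a per-node request of that shape can be issued and confirmed like this:

# Hypothetical sketch: ask for 1024 x 2 MiB hugepages on node 0 and confirm
# the kernel granted them, using the generic per-node sysfs counters.
# This is not what spdk/scripts/setup.sh does internally, only the end state
# the no_shrink_alloc test expects to see.
node=0
pages=1024
base=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB

echo "$pages" > "${base}/nr_hugepages"    # request the pages on this node (needs root)
granted=$(cat "${base}/nr_hugepages")     # the kernel may grant fewer under memory pressure

echo "node${node}=${granted} expecting ${pages}"
[[ "$granted" -eq "$pages" ]]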
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108198328 kB' 'MemAvailable: 112790956 kB' 'Buffers: 9096 kB' 'Cached: 11261796 kB' 'SwapCached: 0 kB' 'Active: 7165112 kB' 'Inactive: 4667068 kB' 'Active(anon): 6774104 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564576 kB' 'Mapped: 182200 kB' 'Shmem: 6212816 kB' 'KReclaimable: 575496 kB' 'Slab: 1349436 kB' 'SReclaimable: 575496 kB' 'SUnreclaim: 773940 kB' 'KernelStack: 27232 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8296140 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237372 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.132 
09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.132 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:59.133 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108198960 kB' 'MemAvailable: 112791588 kB' 'Buffers: 9096 kB' 'Cached: 11261800 kB' 'SwapCached: 0 kB' 'Active: 7165392 kB' 'Inactive: 4667068 kB' 'Active(anon): 6774384 
kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564904 kB' 'Mapped: 182200 kB' 'Shmem: 6212820 kB' 'KReclaimable: 575496 kB' 'Slab: 1349428 kB' 'SReclaimable: 575496 kB' 'SUnreclaim: 773932 kB' 'KernelStack: 27200 kB' 'PageTables: 8112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8296156 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237324 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
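Most of the output in this stretch is the body of get_meminfo: each call dumps the whole of /proc/meminfo (or the per-node meminfo file when a node is passed), then walks it with IFS=': ' read -r var val _, hitting continue for every field whose name does not match the target; the backslash-escaped pattern \H\u\g\e\P\a\g\e\s\_\S\u\r\p is simply HugePages_Surp with glob metacharacters neutralised, and the matching line's value is finally echoed back (here 0). A condensed sketch of that parse loop follows; the function name and arguments are illustrative, not the exact setup/common.sh implementation, which (as the trace shows) uses mapfile plus an extglob strip of the "Node N " prefix.

# Minimal get_meminfo-style reader: print the value of one meminfo field,
# optionally restricted to a single NUMA node. Simplified sketch of the
# read/continue loop visible in the trace, not the SPDK helper itself.
get_meminfo() {
    local field=$1 node=${2-}
    local file=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]]; then
        file=/sys/devices/system/node/node${node}/meminfo
    fi
    while IFS= read -r line; do
        line=${line#Node ${node} }            # per-node files prefix every line with "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$field" ]] || continue    # skip all other fields, as in the trace
        echo "$val"
        return 0
    done < "$file"
    return 1
}

get_meminfo HugePages_Surp      # prints 0 on the system above
get_meminfo HugePages_Free 0    # per-node variant, if node0/meminfo exists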
00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.134 09:13:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.134 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.135 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108198708 kB' 'MemAvailable: 112791336 kB' 'Buffers: 9096 kB' 'Cached: 11261804 kB' 'SwapCached: 0 kB' 'Active: 7165104 kB' 'Inactive: 4667068 kB' 'Active(anon): 6774096 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564572 kB' 'Mapped: 182192 kB' 'Shmem: 6212824 kB' 
'KReclaimable: 575496 kB' 'Slab: 1349492 kB' 'SReclaimable: 575496 kB' 'SUnreclaim: 773996 kB' 'KernelStack: 27216 kB' 'PageTables: 8388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8296180 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237340 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.136 09:13:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.136 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.137 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:59.138 nr_hugepages=1024 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:59.138 resv_hugepages=0 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:59.138 surplus_hugepages=0 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:59.138 anon_hugepages=0 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108199252 kB' 'MemAvailable: 112791880 kB' 'Buffers: 9096 kB' 'Cached: 11261840 kB' 'SwapCached: 0 kB' 'Active: 7165140 kB' 'Inactive: 4667068 kB' 'Active(anon): 6774132 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564576 kB' 'Mapped: 182192 kB' 'Shmem: 6212860 kB' 'KReclaimable: 575496 kB' 'Slab: 1349492 kB' 'SReclaimable: 575496 kB' 'SUnreclaim: 773996 kB' 'KernelStack: 27216 kB' 'PageTables: 8388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8296204 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237292 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.138 09:13:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.138 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 
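A few entries back the trace shows the result of these lookups being checked: surp and resv both come back as 0, the script echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and hugepages.sh@107/@109 then evaluate `(( 1024 == nr_hugepages + surp + resv ))` and `(( 1024 == nr_hugepages ))` before re-reading HugePages_Total. A small sketch of that arithmetic consistency check with this run's values (the error handling here is illustrative, not SPDK's):

```bash
#!/usr/bin/env bash
# Values as echoed in this run's trace (hugepages.sh@99-@105):
nr_hugepages=1024   # requested 2 MiB pages (Hugepagesize: 2048 kB)
surp=0              # HugePages_Surp from get_meminfo
resv=0              # HugePages_Rsvd from get_meminfo
anon=0              # AnonHugePages (reported only, not part of the check)

# Proceed only if the expected pool size (1024) matches both the sum that
# includes surplus/reserved pages and the plain page count.
(( 1024 == nr_hugepages + surp + resv )) || { echo "surplus/reserved mismatch" >&2; exit 1; }
(( 1024 == nr_hugepages ))               || { echo "nr_hugepages mismatch" >&2; exit 1; }
echo "hugepage accounting consistent: total=$nr_hugepages surp=$surp resv=$resv"
```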
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:59.139 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58116876 kB' 'MemUsed: 7542132 kB' 'SwapCached: 0 kB' 'Active: 2253956 kB' 'Inactive: 1034540 kB' 'Active(anon): 2040620 kB' 'Inactive(anon): 0 kB' 'Active(file): 213336 kB' 'Inactive(file): 1034540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2847588 kB' 'Mapped: 74972 kB' 'AnonPages: 444052 kB' 'Shmem: 1599712 kB' 'KernelStack: 14616 kB' 'PageTables: 5364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 195916 kB' 'Slab: 578600 kB' 'SReclaimable: 195916 kB' 'SUnreclaim: 382684 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.140 
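From hugepages.sh@112 onward the same lookup is repeated per NUMA node: get_nodes counts the /sys/devices/system/node/node* directories (no_nodes=2 on this host), and get_meminfo is re-invoked with node=0, at which point the trace shows mem_f switching to /sys/devices/system/node/node0/meminfo and a leading "Node 0 " column being stripped from every line before parsing. A runnable sketch of just that path selection and prefix strip (it falls back to /proc/meminfo when the node file is absent):

```bash
#!/usr/bin/env bash
# Per-node variant of the lookup, as suggested by the trace: pick the node's
# own meminfo file when it exists and strip the leading "Node N " column.
shopt -s extglob
node=0
mem_f=/proc/meminfo
[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo

mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
printf '%s\n' "${mem[@]}" | grep -E '^HugePages_(Total|Free|Surp):'
```

On this run node0 reports HugePages_Total: 1024, HugePages_Free: 1024 and HugePages_Surp: 0, which is what the later "node0=1024 expecting 1024" line asserts.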
09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.140 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:59.141 node0=1024 expecting 1024 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.141 09:13:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:02.439 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:02.439 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:02.439 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:02.439 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:02.439 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:02.439 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:02.439 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:02.439 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:02.439 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:02.439 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:02.439 0000:00:01.7 (8086 0b00): Already using the 
vfio-pci driver 00:05:02.439 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:02.439 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:02.439 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:02.439 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:02.439 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:02.724 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:02.724 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108210712 kB' 'MemAvailable: 112803340 kB' 'Buffers: 9096 kB' 'Cached: 11261952 kB' 'SwapCached: 0 kB' 'Active: 7166588 kB' 'Inactive: 4667068 kB' 'Active(anon): 6775580 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565892 kB' 'Mapped: 182296 kB' 'Shmem: 6212972 kB' 'KReclaimable: 575496 kB' 'Slab: 1350432 kB' 'SReclaimable: 575496 kB' 'SUnreclaim: 774936 kB' 'KernelStack: 27296 kB' 'PageTables: 8956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8300192 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 
237596 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.724 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 
09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.725 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 09:13:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 
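For reference: the xtrace runs above and below come from setup/common.sh's get_meminfo helper walking every /proc/meminfo key until it reaches the requested field (HugePages_Surp at this point) and echoing its value, which is why each key produces one "[[ ... ]]" / "continue" pair in the trace. A minimal stand-alone sketch of that lookup, under a hypothetical name get_meminfo_sketch and assuming the usual "Key:   value kB" layout (a rough equivalent, not the SPDK script itself):

get_meminfo_sketch() {            # $1 = field name, e.g. HugePages_Surp
    local get=$1 var val _
    # Scan /proc/meminfo line by line; IFS=': ' splits "Key:   value kB"
    # into key, value and unit in a single read.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < /proc/meminfo
    echo 0                        # field absent -> report 0
}
# e.g. get_meminfo_sketch HugePages_Surp   -> prints 0 here, matching the trace
# (the traced helper also supports per-node meminfo files; that handling is
#  omitted from this sketch)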
00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108210228 kB' 'MemAvailable: 112802856 kB' 'Buffers: 9096 kB' 'Cached: 11261952 kB' 'SwapCached: 0 kB' 'Active: 7167640 kB' 'Inactive: 4667068 kB' 'Active(anon): 6776632 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566348 kB' 'Mapped: 182280 kB' 'Shmem: 6212972 kB' 'KReclaimable: 575496 kB' 'Slab: 1350680 kB' 'SReclaimable: 575496 kB' 'SUnreclaim: 775184 kB' 'KernelStack: 27328 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8300208 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237564 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.728 09:13:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:02.728 
09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108210548 kB' 'MemAvailable: 112803176 kB' 'Buffers: 9096 kB' 'Cached: 11261972 kB' 'SwapCached: 0 kB' 'Active: 7166244 kB' 'Inactive: 4667068 kB' 'Active(anon): 6775236 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565380 kB' 'Mapped: 182204 kB' 'Shmem: 6212992 kB' 'KReclaimable: 575496 kB' 'Slab: 1350716 kB' 'SReclaimable: 575496 kB' 'SUnreclaim: 775220 kB' 'KernelStack: 27216 kB' 'PageTables: 8368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8298152 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237436 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.728 09:13:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.728 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 09:13:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:02.730 nr_hugepages=1024 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:02.730 resv_hugepages=0 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:02.730 surplus_hugepages=0 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:02.730 anon_hugepages=0 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local 
node= 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108211312 kB' 'MemAvailable: 112803940 kB' 'Buffers: 9096 kB' 'Cached: 11261988 kB' 'SwapCached: 0 kB' 'Active: 7168364 kB' 'Inactive: 4667068 kB' 'Active(anon): 6777356 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4667068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567492 kB' 'Mapped: 182708 kB' 'Shmem: 6213008 kB' 'KReclaimable: 575496 kB' 'Slab: 1350812 kB' 'SReclaimable: 575496 kB' 'SUnreclaim: 775316 kB' 'KernelStack: 27136 kB' 'PageTables: 8140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8300492 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237404 kB' 'VmallocChunk: 0 kB' 'Percpu: 162432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3356020 kB' 'DirectMap2M: 16246784 kB' 'DirectMap1G: 116391936 kB' 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.730 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.731 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.732 09:13:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58117120 kB' 'MemUsed: 7541888 kB' 'SwapCached: 0 kB' 'Active: 2254336 kB' 'Inactive: 1034540 kB' 'Active(anon): 2041000 kB' 'Inactive(anon): 0 kB' 'Active(file): 213336 kB' 'Inactive(file): 1034540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2847672 kB' 'Mapped: 75124 kB' 'AnonPages: 444344 kB' 'Shmem: 1599796 kB' 'KernelStack: 14616 kB' 'PageTables: 5420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 195916 kB' 'Slab: 579708 kB' 'SReclaimable: 195916 kB' 'SUnreclaim: 383792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.732 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.733 09:13:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.733 
09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:02.733 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 
00:05:02.733 node0=1024 expecting 1024 00:05:02.734 09:13:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:02.734 00:05:02.734 real 0m8.030s 00:05:02.734 user 0m3.143s 00:05:02.734 sys 0m5.012s 00:05:02.734 09:13:49 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.734 09:13:49 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:02.734 ************************************ 00:05:02.734 END TEST no_shrink_alloc 00:05:02.734 ************************************ 00:05:02.734 09:13:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:02.734 09:13:49 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:02.734 09:13:49 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:02.734 09:13:49 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:02.734 09:13:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:02.734 09:13:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:02.734 09:13:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:02.734 09:13:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:02.994 09:13:49 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:02.994 09:13:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:02.994 09:13:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:02.994 09:13:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:02.994 09:13:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:02.994 09:13:49 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:02.994 09:13:49 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:02.994 00:05:02.994 real 0m28.851s 00:05:02.994 user 0m11.434s 00:05:02.994 sys 0m17.778s 00:05:02.994 09:13:49 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.994 09:13:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:02.994 ************************************ 00:05:02.994 END TEST hugepages 00:05:02.994 ************************************ 00:05:02.994 09:13:49 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:02.994 09:13:49 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:02.994 09:13:49 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.994 09:13:49 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.994 09:13:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:02.994 ************************************ 00:05:02.994 START TEST driver 00:05:02.994 ************************************ 00:05:02.994 09:13:50 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:02.994 * Looking for test storage... 
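The no_shrink_alloc run above ends by confirming that node0 still reports 1024 hugepages after the shrink attempt; the long stretch of continue lines before it is setup/common.sh walking a meminfo file field by field until it reaches HugePages_Surp. A minimal stand-alone sketch of that parsing pattern, reading the system-wide /proc/meminfo (the real helper also handles the per-node meminfo files used for the node0 accounting):

    # Extract a single counter (e.g. HugePages_Surp) from /proc/meminfo.
    # Mirrors the IFS=': ' / read -r loop traced at setup/common.sh@31-33.
    get_meminfo_field() {
        local field=$1 var val _rest
        while IFS=': ' read -r var val _rest; do
            if [[ $var == "$field" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    surplus=$(get_meminfo_field HugePages_Surp)   # 0 in this run

The helper returns as soon as the field matches, which is why every non-matching field in the trace is answered with a continue and the loop stops at HugePages_Surp.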
00:05:02.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:02.994 09:13:50 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:02.994 09:13:50 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:02.994 09:13:50 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:08.276 09:13:55 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:08.276 09:13:55 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.276 09:13:55 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.276 09:13:55 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:08.276 ************************************ 00:05:08.276 START TEST guess_driver 00:05:08.276 ************************************ 00:05:08.276 09:13:55 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:08.276 09:13:55 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:08.276 09:13:55 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:08.276 09:13:55 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:08.276 09:13:55 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:08.276 09:13:55 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:08.276 09:13:55 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:08.276 09:13:55 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:08.276 09:13:55 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:08.276 09:13:55 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:08.276 09:13:55 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 370 > 0 )) 00:05:08.276 09:13:55 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:08.276 09:13:55 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:08.276 09:13:55 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:08.276 09:13:55 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:08.276 09:13:55 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:08.276 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:08.276 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:08.276 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:08.276 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:08.276 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:08.276 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:08.276 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:08.276 09:13:55 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:08.276 09:13:55 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:08.276 09:13:55 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:08.276 09:13:55 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:08.276 09:13:55 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:08.276 Looking for driver=vfio-pci 00:05:08.276 09:13:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.276 09:13:55 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:08.276 09:13:55 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.276 09:13:55 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.501 09:13:58 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.501 09:13:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.501 09:13:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.501 09:13:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.501 09:13:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.501 09:13:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.501 09:13:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.501 09:13:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.501 09:13:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.501 09:13:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.501 09:13:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.501 09:13:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.501 09:13:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.501 09:13:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.501 09:13:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.501 09:13:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.501 09:13:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.501 09:13:59 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:12.501 09:13:59 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:12.501 09:13:59 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:12.501 09:13:59 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:17.775 00:05:17.775 real 0m9.011s 00:05:17.775 user 0m2.883s 00:05:17.775 sys 0m5.370s 00:05:17.775 09:14:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.775 09:14:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:17.775 ************************************ 00:05:17.775 END TEST guess_driver 00:05:17.775 ************************************ 00:05:17.775 09:14:04 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:17.775 00:05:17.775 real 0m14.232s 00:05:17.775 user 0m4.444s 00:05:17.775 sys 0m8.285s 00:05:17.775 09:14:04 
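guess_driver above settles on vfio-pci because the host exposes 370 IOMMU groups and modprobe resolves vfio_pci (and its dependencies) to real .ko files; the repeated vfio-pci == vfio-pci checks that follow are the script re-reading setup.sh config output to confirm every device actually got bound to that driver. A stand-alone approximation of the pick logic, under the assumption that uio_pci_generic is the fallback candidate (the trace only shows the 'No valid driver found' failure string, not the fallback path itself):

    # Prefer vfio-pci when the IOMMU is usable and the module resolves; otherwise fall back.
    pick_driver() {
        local groups=(/sys/kernel/iommu_groups/*)
        local unsafe_vfio
        unsafe_vfio=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null)
        if [[ -e ${groups[0]} ]] || [[ $unsafe_vfio == Y ]]; then
            # Same test the trace shows: the module must resolve to .ko files.
            if modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
                echo vfio-pci
                return 0
            fi
        fi
        echo uio_pci_generic   # assumed fallback; the script reports 'No valid driver found' if nothing works
    }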
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.775 09:14:04 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:17.775 ************************************ 00:05:17.775 END TEST driver 00:05:17.775 ************************************ 00:05:17.775 09:14:04 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:17.775 09:14:04 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:17.775 09:14:04 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.775 09:14:04 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.775 09:14:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:17.775 ************************************ 00:05:17.775 START TEST devices 00:05:17.775 ************************************ 00:05:17.775 09:14:04 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:17.775 * Looking for test storage... 00:05:17.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:17.775 09:14:04 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:17.775 09:14:04 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:17.775 09:14:04 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:17.775 09:14:04 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:21.970 09:14:08 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:21.970 09:14:08 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:21.970 09:14:08 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:21.970 09:14:08 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:21.970 09:14:08 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:21.970 09:14:08 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:21.970 09:14:08 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:21.970 09:14:08 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:21.970 09:14:08 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:21.970 09:14:08 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:21.970 09:14:08 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:21.970 09:14:08 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:21.970 09:14:08 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:21.970 09:14:08 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:21.970 09:14:08 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:21.970 09:14:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:21.970 09:14:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:21.970 09:14:08 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:05:21.970 09:14:08 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:05:21.970 09:14:08 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:21.970 09:14:08 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:21.970 
09:14:08 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:21.970 No valid GPT data, bailing 00:05:21.970 09:14:08 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:21.970 09:14:08 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:21.970 09:14:08 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:21.970 09:14:08 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:21.970 09:14:08 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:21.970 09:14:08 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:21.970 09:14:08 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:05:21.970 09:14:08 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:05:21.970 09:14:08 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:21.970 09:14:08 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:05:21.970 09:14:08 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:21.970 09:14:08 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:21.970 09:14:08 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:21.970 09:14:08 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.970 09:14:08 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.970 09:14:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:21.970 ************************************ 00:05:21.970 START TEST nvme_mount 00:05:21.970 ************************************ 00:05:21.970 09:14:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:21.970 09:14:08 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:21.970 09:14:08 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:21.970 09:14:08 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:21.970 09:14:08 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:21.970 09:14:08 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:21.970 09:14:08 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:21.970 09:14:08 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:21.970 09:14:08 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:21.970 09:14:08 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:21.970 09:14:08 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:21.970 09:14:08 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:21.970 09:14:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:21.970 09:14:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:21.970 09:14:08 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:21.970 09:14:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:21.970 09:14:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
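Device selection above boils down to two checks: spdk-gpt.py and blkid find no partition table on nvme0n1 (so the disk is not in use), and sec_size_to_bytes reports 1920383410176 bytes, comfortably above the 3221225472-byte (3 GiB) minimum. The size comes straight from sysfs, which counts 512-byte sectors; a short sketch of the same arithmetic:

    # /sys/block/<dev>/size is in 512-byte sectors regardless of the drive's logical block size.
    dev=nvme0n1
    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace
    sectors=$(< "/sys/block/$dev/size")
    bytes=$((sectors * 512))                    # 1920383410176 for this 1.92 TB disk
    ((bytes >= min_disk_size)) && echo "$dev is large enough ($bytes bytes)"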
# (( part <= part_no )) 00:05:21.970 09:14:08 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:21.970 09:14:08 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:21.970 09:14:08 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:22.909 Creating new GPT entries in memory. 00:05:22.909 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:22.909 other utilities. 00:05:22.909 09:14:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:22.909 09:14:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:22.909 09:14:09 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:22.909 09:14:09 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:22.909 09:14:09 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:23.848 Creating new GPT entries in memory. 00:05:23.848 The operation has completed successfully. 00:05:23.848 09:14:10 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:23.848 09:14:10 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:23.848 09:14:10 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 449502 00:05:23.848 09:14:10 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.848 09:14:10 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:23.848 09:14:10 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.848 09:14:10 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:23.848 09:14:10 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:23.848 09:14:10 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.848 09:14:10 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:23.848 09:14:10 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:23.848 09:14:10 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:23.848 09:14:10 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.848 09:14:10 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:23.848 09:14:10 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:23.848 09:14:10 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:23.848 09:14:10 
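The nvme_mount setup traced above is a straight partition/format/mount sequence: zap the existing label, create one 1 GiB partition under flock on the disk node so nothing else rewrites the table concurrently, block on sync_dev_uevents.sh until udev has created the partition node, then mkfs.ext4 and mount it. A condensed replay with shortened paths (udevadm settle stands in for the uevent-sync script here):

    disk=/dev/nvme0n1
    mnt=/tmp/nvme_mount                                # stand-in for .../spdk/test/setup/nvme_mount
    sgdisk "$disk" --zap-all                           # destroy any existing GPT/MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199  # sectors 2048-2099199 = 2097152 sectors = 1 GiB
    udevadm settle                                     # crude stand-in for sync_dev_uevents.sh
    mkdir -p "$mnt"
    mkfs.ext4 -qF "${disk}p1"
    mount "${disk}p1" "$mnt"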
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:23.848 09:14:10 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:23.848 09:14:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.848 09:14:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:23.848 09:14:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:23.848 09:14:10 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.848 09:14:10 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:27.190 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.190 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.190 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.190 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.190 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.190 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.190 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.190 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.190 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.190 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.190 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.190 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.190 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.190 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.190 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.190 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:27.450 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:27.450 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:27.709 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:27.709 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:27.709 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:27.709 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:27.709 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:27.709 09:14:14 setup.sh.devices.nvme_mount -- 
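Verification and cleanup for the first mount pass both show up in the trace: the "Active devices: mount@nvme0n1:nvme0n1p1" line from setup.sh config proves the mounted partition kept PCI device 0000:65:00.0 from being rebound, and wipefs then strips every signature the test just created. The offsets in the wipefs output are worth decoding: 0x438 is the ext4 superblock magic (53 ef), 0x200 holds the GPT header ("EFI PART"), and 0x1fe the protective MBR's 55 aa. A dry run lists the same signatures without erasing anything:

    wipefs -n /dev/nvme0n1p1      # --no-act: show what --all would erase
    wipefs --all /dev/nvme0n1p1   # drop the ext4 signature on the partition
    wipefs --all /dev/nvme0n1     # drop the GPT headers (primary + backup) and protective MBR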
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:27.709 09:14:14 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:27.709 09:14:14 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:27.709 09:14:14 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:27.709 09:14:14 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:27.968 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:27.968 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:27.968 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:27.968 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:27.968 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:27.968 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:27.968 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:27.968 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:27.968 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:27.968 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.968 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:27.968 09:14:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:27.968 09:14:14 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.968 09:14:14 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:31.268 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.268 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.268 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.268 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.268 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.268 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.268 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.268 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.268 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.268 09:14:18 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.268 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.268 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.268 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.268 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.268 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.529 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.530 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.530 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:31.530 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:31.530 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.530 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.530 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.530 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.530 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.530 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.530 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.530 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.530 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.530 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.530 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.530 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.530 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.530 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.530 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.530 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.530 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.530 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:31.530 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:31.530 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:31.530 09:14:18 
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:31.530 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:31.530 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:31.792 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:05:31.792 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:31.792 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:31.792 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:31.792 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:31.792 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:31.792 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:31.792 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:31.792 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.792 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:31.792 09:14:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:31.792 09:14:18 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.792 09:14:18 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.997 09:14:22 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:35.997 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:35.997 00:05:35.997 real 0m13.876s 00:05:35.997 user 0m4.417s 00:05:35.997 sys 0m7.354s 00:05:35.997 09:14:22 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.997 09:14:22 
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:35.997 ************************************ 00:05:35.997 END TEST nvme_mount 00:05:35.997 ************************************ 00:05:35.997 09:14:22 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:35.997 09:14:22 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:35.997 09:14:22 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.997 09:14:22 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.997 09:14:22 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:35.997 ************************************ 00:05:35.997 START TEST dm_mount 00:05:35.997 ************************************ 00:05:35.997 09:14:22 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:35.997 09:14:22 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:35.997 09:14:22 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:35.997 09:14:22 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:35.997 09:14:22 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:35.997 09:14:22 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:35.997 09:14:22 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:35.997 09:14:22 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:35.997 09:14:22 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:35.997 09:14:22 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:35.997 09:14:22 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:35.997 09:14:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:35.997 09:14:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:35.997 09:14:22 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:35.997 09:14:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:35.997 09:14:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:35.997 09:14:22 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:35.997 09:14:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:35.997 09:14:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:35.997 09:14:22 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:35.997 09:14:22 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:35.997 09:14:22 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:36.567 Creating new GPT entries in memory. 00:05:36.567 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:36.567 other utilities. 00:05:36.567 09:14:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:36.567 09:14:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:36.567 09:14:23 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:36.567 09:14:23 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:36.567 09:14:23 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:37.952 Creating new GPT entries in memory. 00:05:37.952 The operation has completed successfully. 00:05:37.952 09:14:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:37.952 09:14:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:37.952 09:14:24 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:37.952 09:14:24 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:37.952 09:14:24 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:38.894 The operation has completed successfully. 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 455052 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- 
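dm_mount carves the same disk into two 1 GiB partitions (sectors 2048-2099199 and 2099200-4196351) and stacks a device-mapper target on top of them; the trace shows dmsetup create nvme_dm_test and the resulting /dev/dm-0, but not the table that was fed in. A linear concatenation of the two partitions is the natural shape, so, purely as an assumption about that table, the equivalent manual invocation would be:

    # 2097152 sectors per partition, mapped back to back.
    # Table format: logical_start  length  linear  backing_device  backing_offset
    dmsetup create nvme_dm_test <<'TABLE'
    0 2097152 linear /dev/nvme0n1p1 0
    2097152 2097152 linear /dev/nvme0n1p2 0
    TABLE
    readlink -f /dev/mapper/nvme_dm_test    # /dev/dm-0 in this run
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test

Once the mapping exists, /sys/class/block/nvme0n1p1/holders/ and .../nvme0n1p2/holders/ both contain dm-0, which is what the holder@nvme0n1p1:dm-0 entries in the later verify step key off.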
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:38.894 09:14:25 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:42.194 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:42.194 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.194 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:42.194 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.194 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:42.194 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.194 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:42.194 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.194 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:42.194 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.194 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:42.194 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.194 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:42.194 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.194 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:42.194 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.454 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:42.454 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:42.454 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:42.454 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.454 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:42.454 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.454 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:42.454 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.454 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:42.454 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.454 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:42.454 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.454 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:42.454 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.454 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:42.454 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.454 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:42.454 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.454 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:42.454 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.454 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:42.454 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:42.455 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:42.455 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:42.455 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:42.455 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:42.455 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:42.455 09:14:29 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:42.455 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:42.455 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:42.455 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:42.455 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:42.455 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:42.455 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:42.455 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:42.455 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:42.455 09:14:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.455 09:14:29 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.455 09:14:29 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:46.681 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:46.681 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.681 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:46.681 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.681 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:46.681 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.681 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:46.681 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.681 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:46.681 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.681 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:46.681 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.681 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:46.681 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.681 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:46.681 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.681 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:46.681 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:46.681 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:46.682 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:46.682 00:05:46.682 real 0m10.736s 00:05:46.682 user 0m2.826s 00:05:46.682 sys 0m4.987s 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.682 09:14:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:46.682 ************************************ 00:05:46.682 END TEST dm_mount 00:05:46.682 ************************************ 00:05:46.682 09:14:33 setup.sh.devices -- common/autotest_common.sh@1142 -- # 
return 0 00:05:46.682 09:14:33 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:46.682 09:14:33 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:46.682 09:14:33 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:46.682 09:14:33 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:46.682 09:14:33 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:46.682 09:14:33 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:46.682 09:14:33 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:46.682 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:46.682 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:46.682 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:46.682 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:46.682 09:14:33 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:46.682 09:14:33 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:46.682 09:14:33 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:46.682 09:14:33 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:46.682 09:14:33 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:46.682 09:14:33 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:46.682 09:14:33 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:46.682 00:05:46.682 real 0m29.426s 00:05:46.682 user 0m8.923s 00:05:46.682 sys 0m15.357s 00:05:46.682 09:14:33 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.682 09:14:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:46.682 ************************************ 00:05:46.682 END TEST devices 00:05:46.682 ************************************ 00:05:46.682 09:14:33 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:46.682 00:05:46.682 real 1m39.669s 00:05:46.682 user 0m33.791s 00:05:46.682 sys 0m57.354s 00:05:46.682 09:14:33 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.682 09:14:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:46.682 ************************************ 00:05:46.682 END TEST setup.sh 00:05:46.682 ************************************ 00:05:46.682 09:14:33 -- common/autotest_common.sh@1142 -- # return 0 00:05:46.682 09:14:33 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:50.880 Hugepages 00:05:50.880 node hugesize free / total 00:05:50.880 node0 1048576kB 0 / 0 00:05:50.880 node0 2048kB 2048 / 2048 00:05:50.880 node1 1048576kB 0 / 0 00:05:50.880 node1 2048kB 0 / 0 00:05:50.880 00:05:50.880 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:50.880 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:50.880 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:50.880 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:50.880 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:50.880 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:50.880 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:50.880 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:50.880 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:50.880 NVMe 
0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:50.880 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:50.880 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:50.880 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:50.880 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:50.880 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:50.880 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:50.880 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:50.880 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:50.880 09:14:37 -- spdk/autotest.sh@130 -- # uname -s 00:05:50.880 09:14:37 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:50.880 09:14:37 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:50.880 09:14:37 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:55.085 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:55.085 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:55.085 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:55.085 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:55.085 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:55.085 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:55.085 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:55.085 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:55.085 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:55.085 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:55.085 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:55.085 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:55.085 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:55.085 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:55.085 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:55.085 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:56.466 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:56.466 09:14:43 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:57.408 09:14:44 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:57.408 09:14:44 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:57.408 09:14:44 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:57.408 09:14:44 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:57.408 09:14:44 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:57.408 09:14:44 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:57.408 09:14:44 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:57.408 09:14:44 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:57.408 09:14:44 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:57.408 09:14:44 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:57.408 09:14:44 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:57.408 09:14:44 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:01.615 Waiting for block devices as requested 00:06:01.615 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:01.615 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:06:01.615 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:01.615 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:01.615 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:01.615 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:01.615 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:01.615 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:06:01.615 0000:65:00.0 (144d a80a): 
vfio-pci -> nvme 00:06:01.876 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:01.876 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:06:02.137 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:02.137 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:02.137 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:02.137 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:02.397 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:02.397 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:06:02.397 09:14:49 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:02.397 09:14:49 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:06:02.397 09:14:49 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:06:02.397 09:14:49 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:06:02.397 09:14:49 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:02.397 09:14:49 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:06:02.397 09:14:49 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:02.397 09:14:49 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:02.397 09:14:49 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:02.397 09:14:49 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:02.397 09:14:49 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:02.397 09:14:49 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:02.397 09:14:49 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:02.397 09:14:49 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:06:02.397 09:14:49 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:02.397 09:14:49 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:02.397 09:14:49 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:02.397 09:14:49 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:02.397 09:14:49 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:02.397 09:14:49 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:02.397 09:14:49 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:02.397 09:14:49 -- common/autotest_common.sh@1557 -- # continue 00:06:02.397 09:14:49 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:02.397 09:14:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:02.397 09:14:49 -- common/autotest_common.sh@10 -- # set +x 00:06:02.397 09:14:49 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:02.397 09:14:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:02.397 09:14:49 -- common/autotest_common.sh@10 -- # set +x 00:06:02.397 09:14:49 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:06.656 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:06.656 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:06.656 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:06.656 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:06.656 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:06.656 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:06.656 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:06.656 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:06.656 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:06.656 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 
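Note: the pre_cleanup pass above decides whether a namespace revert is needed by scraping two fields out of nvme id-ctrl output: the OACS word, whose bit 3 signals namespace-management support, and the unallocated capacity (unvmcap). A stand-alone sketch of that same check, assuming nvme-cli is installed and /dev/nvme0 is the controller of interest:

    ctrlr=/dev/nvme0                                            # controller node, as in the trace above
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)     # e.g. " 0x5f" in this run
    if (( (oacs & 0x8) != 0 )); then                            # bit 3: namespace management supported
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && echo "all capacity allocated, nothing to revert on $ctrlr"
    fi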
00:06:06.656 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:06.656 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:06.656 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:06.656 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:06.656 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:06.656 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:06.656 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:06.656 09:14:53 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:06.656 09:14:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:06.656 09:14:53 -- common/autotest_common.sh@10 -- # set +x 00:06:06.656 09:14:53 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:06.656 09:14:53 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:06.656 09:14:53 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:06.656 09:14:53 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:06.656 09:14:53 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:06.656 09:14:53 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:06.656 09:14:53 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:06.656 09:14:53 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:06.656 09:14:53 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:06.656 09:14:53 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:06.656 09:14:53 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:06.656 09:14:53 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:06.656 09:14:53 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:06:06.656 09:14:53 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:06.656 09:14:53 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:06:06.656 09:14:53 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:06:06.656 09:14:53 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:06:06.656 09:14:53 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:06:06.656 09:14:53 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:06:06.656 09:14:53 -- common/autotest_common.sh@1593 -- # return 0 00:06:06.656 09:14:53 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:06.656 09:14:53 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:06.656 09:14:53 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:06.656 09:14:53 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:06.656 09:14:53 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:06.656 09:14:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:06.656 09:14:53 -- common/autotest_common.sh@10 -- # set +x 00:06:06.656 09:14:53 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:06.656 09:14:53 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:06.656 09:14:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.656 09:14:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.656 09:14:53 -- common/autotest_common.sh@10 -- # set +x 00:06:06.656 ************************************ 00:06:06.656 START TEST env 00:06:06.656 ************************************ 00:06:06.656 09:14:53 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:06.656 * Looking for test storage... 
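Note: opal_revert_cleanup above walks the NVMe BDFs produced by scripts/gen_nvme.sh and only acts on controllers whose PCI device ID is 0x0a54; the controller at 0000:65:00.0 reads back 0xa80a, so the step is a no-op here. A sysfs-only approximation of that filter (0x010802 is the standard PCI class code for NVMe; this is a rough stand-in for the gen_nvme.sh/jq pipeline shown in the trace):

    for dev in /sys/bus/pci/devices/*; do
        [ "$(cat "$dev/class")" = "0x010802" ] || continue      # NVMe controllers only
        dev_id=$(cat "$dev/device")                             # 0xa80a for 0000:65:00.0 in this run
        [ "$dev_id" = "0x0a54" ] && echo "${dev##*/}: opal revert would apply"
    done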
00:06:06.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:06.656 09:14:53 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:06.656 09:14:53 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.656 09:14:53 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.656 09:14:53 env -- common/autotest_common.sh@10 -- # set +x 00:06:06.656 ************************************ 00:06:06.656 START TEST env_memory 00:06:06.656 ************************************ 00:06:06.656 09:14:53 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:06.656 00:06:06.656 00:06:06.656 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.656 http://cunit.sourceforge.net/ 00:06:06.656 00:06:06.656 00:06:06.656 Suite: memory 00:06:06.916 Test: alloc and free memory map ...[2024-07-15 09:14:53.868213] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:06.916 passed 00:06:06.916 Test: mem map translation ...[2024-07-15 09:14:53.896451] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:06.916 [2024-07-15 09:14:53.896486] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:06.916 [2024-07-15 09:14:53.896532] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:06.916 [2024-07-15 09:14:53.896540] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:06.916 passed 00:06:06.916 Test: mem map registration ...[2024-07-15 09:14:53.956825] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:06.916 [2024-07-15 09:14:53.956847] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:06.916 passed 00:06:06.916 Test: mem map adjacent registrations ...passed 00:06:06.916 00:06:06.916 Run Summary: Type Total Ran Passed Failed Inactive 00:06:06.916 suites 1 1 n/a 0 0 00:06:06.916 tests 4 4 4 0 0 00:06:06.916 asserts 152 152 152 0 n/a 00:06:06.916 00:06:06.917 Elapsed time = 0.203 seconds 00:06:06.917 00:06:06.917 real 0m0.218s 00:06:06.917 user 0m0.208s 00:06:06.917 sys 0m0.009s 00:06:06.917 09:14:54 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.917 09:14:54 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:06.917 ************************************ 00:06:06.917 END TEST env_memory 00:06:06.917 ************************************ 00:06:06.917 09:14:54 env -- common/autotest_common.sh@1142 -- # return 0 00:06:06.917 09:14:54 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:06.917 09:14:54 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
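Note: every test below is driven through run_test, which is what emits the START TEST / END TEST banners and the real/user/sys timing lines in this log. Roughly, and only as an illustration (the real helper in autotest_common.sh does more bookkeeping than this):

    run_test_sketch() {                    # illustrative stand-in, not the real autotest helper
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                          # produces the real/user/sys lines seen in this log
        local rc=$?
        echo "END TEST $name"
        return "$rc"
    }
    # e.g. run_test_sketch env_memory "$rootdir/test/env/memory/memory_ut"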
00:06:06.917 09:14:54 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.917 09:14:54 env -- common/autotest_common.sh@10 -- # set +x 00:06:06.917 ************************************ 00:06:06.917 START TEST env_vtophys 00:06:06.917 ************************************ 00:06:06.917 09:14:54 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:07.177 EAL: lib.eal log level changed from notice to debug 00:06:07.177 EAL: Detected lcore 0 as core 0 on socket 0 00:06:07.177 EAL: Detected lcore 1 as core 1 on socket 0 00:06:07.177 EAL: Detected lcore 2 as core 2 on socket 0 00:06:07.177 EAL: Detected lcore 3 as core 3 on socket 0 00:06:07.177 EAL: Detected lcore 4 as core 4 on socket 0 00:06:07.177 EAL: Detected lcore 5 as core 5 on socket 0 00:06:07.177 EAL: Detected lcore 6 as core 6 on socket 0 00:06:07.177 EAL: Detected lcore 7 as core 7 on socket 0 00:06:07.177 EAL: Detected lcore 8 as core 8 on socket 0 00:06:07.177 EAL: Detected lcore 9 as core 9 on socket 0 00:06:07.177 EAL: Detected lcore 10 as core 10 on socket 0 00:06:07.177 EAL: Detected lcore 11 as core 11 on socket 0 00:06:07.177 EAL: Detected lcore 12 as core 12 on socket 0 00:06:07.177 EAL: Detected lcore 13 as core 13 on socket 0 00:06:07.177 EAL: Detected lcore 14 as core 14 on socket 0 00:06:07.177 EAL: Detected lcore 15 as core 15 on socket 0 00:06:07.177 EAL: Detected lcore 16 as core 16 on socket 0 00:06:07.177 EAL: Detected lcore 17 as core 17 on socket 0 00:06:07.177 EAL: Detected lcore 18 as core 18 on socket 0 00:06:07.177 EAL: Detected lcore 19 as core 19 on socket 0 00:06:07.177 EAL: Detected lcore 20 as core 20 on socket 0 00:06:07.177 EAL: Detected lcore 21 as core 21 on socket 0 00:06:07.177 EAL: Detected lcore 22 as core 22 on socket 0 00:06:07.177 EAL: Detected lcore 23 as core 23 on socket 0 00:06:07.177 EAL: Detected lcore 24 as core 24 on socket 0 00:06:07.177 EAL: Detected lcore 25 as core 25 on socket 0 00:06:07.177 EAL: Detected lcore 26 as core 26 on socket 0 00:06:07.177 EAL: Detected lcore 27 as core 27 on socket 0 00:06:07.177 EAL: Detected lcore 28 as core 28 on socket 0 00:06:07.177 EAL: Detected lcore 29 as core 29 on socket 0 00:06:07.177 EAL: Detected lcore 30 as core 30 on socket 0 00:06:07.177 EAL: Detected lcore 31 as core 31 on socket 0 00:06:07.177 EAL: Detected lcore 32 as core 32 on socket 0 00:06:07.177 EAL: Detected lcore 33 as core 33 on socket 0 00:06:07.177 EAL: Detected lcore 34 as core 34 on socket 0 00:06:07.177 EAL: Detected lcore 35 as core 35 on socket 0 00:06:07.177 EAL: Detected lcore 36 as core 0 on socket 1 00:06:07.177 EAL: Detected lcore 37 as core 1 on socket 1 00:06:07.177 EAL: Detected lcore 38 as core 2 on socket 1 00:06:07.177 EAL: Detected lcore 39 as core 3 on socket 1 00:06:07.177 EAL: Detected lcore 40 as core 4 on socket 1 00:06:07.177 EAL: Detected lcore 41 as core 5 on socket 1 00:06:07.177 EAL: Detected lcore 42 as core 6 on socket 1 00:06:07.177 EAL: Detected lcore 43 as core 7 on socket 1 00:06:07.177 EAL: Detected lcore 44 as core 8 on socket 1 00:06:07.177 EAL: Detected lcore 45 as core 9 on socket 1 00:06:07.177 EAL: Detected lcore 46 as core 10 on socket 1 00:06:07.177 EAL: Detected lcore 47 as core 11 on socket 1 00:06:07.177 EAL: Detected lcore 48 as core 12 on socket 1 00:06:07.177 EAL: Detected lcore 49 as core 13 on socket 1 00:06:07.177 EAL: Detected lcore 50 as core 14 on socket 1 00:06:07.177 EAL: Detected lcore 51 as core 15 on socket 1 00:06:07.177 
EAL: Detected lcore 52 as core 16 on socket 1 00:06:07.177 EAL: Detected lcore 53 as core 17 on socket 1 00:06:07.178 EAL: Detected lcore 54 as core 18 on socket 1 00:06:07.178 EAL: Detected lcore 55 as core 19 on socket 1 00:06:07.178 EAL: Detected lcore 56 as core 20 on socket 1 00:06:07.178 EAL: Detected lcore 57 as core 21 on socket 1 00:06:07.178 EAL: Detected lcore 58 as core 22 on socket 1 00:06:07.178 EAL: Detected lcore 59 as core 23 on socket 1 00:06:07.178 EAL: Detected lcore 60 as core 24 on socket 1 00:06:07.178 EAL: Detected lcore 61 as core 25 on socket 1 00:06:07.178 EAL: Detected lcore 62 as core 26 on socket 1 00:06:07.178 EAL: Detected lcore 63 as core 27 on socket 1 00:06:07.178 EAL: Detected lcore 64 as core 28 on socket 1 00:06:07.178 EAL: Detected lcore 65 as core 29 on socket 1 00:06:07.178 EAL: Detected lcore 66 as core 30 on socket 1 00:06:07.178 EAL: Detected lcore 67 as core 31 on socket 1 00:06:07.178 EAL: Detected lcore 68 as core 32 on socket 1 00:06:07.178 EAL: Detected lcore 69 as core 33 on socket 1 00:06:07.178 EAL: Detected lcore 70 as core 34 on socket 1 00:06:07.178 EAL: Detected lcore 71 as core 35 on socket 1 00:06:07.178 EAL: Detected lcore 72 as core 0 on socket 0 00:06:07.178 EAL: Detected lcore 73 as core 1 on socket 0 00:06:07.178 EAL: Detected lcore 74 as core 2 on socket 0 00:06:07.178 EAL: Detected lcore 75 as core 3 on socket 0 00:06:07.178 EAL: Detected lcore 76 as core 4 on socket 0 00:06:07.178 EAL: Detected lcore 77 as core 5 on socket 0 00:06:07.178 EAL: Detected lcore 78 as core 6 on socket 0 00:06:07.178 EAL: Detected lcore 79 as core 7 on socket 0 00:06:07.178 EAL: Detected lcore 80 as core 8 on socket 0 00:06:07.178 EAL: Detected lcore 81 as core 9 on socket 0 00:06:07.178 EAL: Detected lcore 82 as core 10 on socket 0 00:06:07.178 EAL: Detected lcore 83 as core 11 on socket 0 00:06:07.178 EAL: Detected lcore 84 as core 12 on socket 0 00:06:07.178 EAL: Detected lcore 85 as core 13 on socket 0 00:06:07.178 EAL: Detected lcore 86 as core 14 on socket 0 00:06:07.178 EAL: Detected lcore 87 as core 15 on socket 0 00:06:07.178 EAL: Detected lcore 88 as core 16 on socket 0 00:06:07.178 EAL: Detected lcore 89 as core 17 on socket 0 00:06:07.178 EAL: Detected lcore 90 as core 18 on socket 0 00:06:07.178 EAL: Detected lcore 91 as core 19 on socket 0 00:06:07.178 EAL: Detected lcore 92 as core 20 on socket 0 00:06:07.178 EAL: Detected lcore 93 as core 21 on socket 0 00:06:07.178 EAL: Detected lcore 94 as core 22 on socket 0 00:06:07.178 EAL: Detected lcore 95 as core 23 on socket 0 00:06:07.178 EAL: Detected lcore 96 as core 24 on socket 0 00:06:07.178 EAL: Detected lcore 97 as core 25 on socket 0 00:06:07.178 EAL: Detected lcore 98 as core 26 on socket 0 00:06:07.178 EAL: Detected lcore 99 as core 27 on socket 0 00:06:07.178 EAL: Detected lcore 100 as core 28 on socket 0 00:06:07.178 EAL: Detected lcore 101 as core 29 on socket 0 00:06:07.178 EAL: Detected lcore 102 as core 30 on socket 0 00:06:07.178 EAL: Detected lcore 103 as core 31 on socket 0 00:06:07.178 EAL: Detected lcore 104 as core 32 on socket 0 00:06:07.178 EAL: Detected lcore 105 as core 33 on socket 0 00:06:07.178 EAL: Detected lcore 106 as core 34 on socket 0 00:06:07.178 EAL: Detected lcore 107 as core 35 on socket 0 00:06:07.178 EAL: Detected lcore 108 as core 0 on socket 1 00:06:07.178 EAL: Detected lcore 109 as core 1 on socket 1 00:06:07.178 EAL: Detected lcore 110 as core 2 on socket 1 00:06:07.178 EAL: Detected lcore 111 as core 3 on socket 1 00:06:07.178 EAL: Detected 
lcore 112 as core 4 on socket 1 00:06:07.178 EAL: Detected lcore 113 as core 5 on socket 1 00:06:07.178 EAL: Detected lcore 114 as core 6 on socket 1 00:06:07.178 EAL: Detected lcore 115 as core 7 on socket 1 00:06:07.178 EAL: Detected lcore 116 as core 8 on socket 1 00:06:07.178 EAL: Detected lcore 117 as core 9 on socket 1 00:06:07.178 EAL: Detected lcore 118 as core 10 on socket 1 00:06:07.178 EAL: Detected lcore 119 as core 11 on socket 1 00:06:07.178 EAL: Detected lcore 120 as core 12 on socket 1 00:06:07.178 EAL: Detected lcore 121 as core 13 on socket 1 00:06:07.178 EAL: Detected lcore 122 as core 14 on socket 1 00:06:07.178 EAL: Detected lcore 123 as core 15 on socket 1 00:06:07.178 EAL: Detected lcore 124 as core 16 on socket 1 00:06:07.178 EAL: Detected lcore 125 as core 17 on socket 1 00:06:07.178 EAL: Detected lcore 126 as core 18 on socket 1 00:06:07.178 EAL: Detected lcore 127 as core 19 on socket 1 00:06:07.178 EAL: Skipped lcore 128 as core 20 on socket 1 00:06:07.178 EAL: Skipped lcore 129 as core 21 on socket 1 00:06:07.178 EAL: Skipped lcore 130 as core 22 on socket 1 00:06:07.178 EAL: Skipped lcore 131 as core 23 on socket 1 00:06:07.178 EAL: Skipped lcore 132 as core 24 on socket 1 00:06:07.178 EAL: Skipped lcore 133 as core 25 on socket 1 00:06:07.178 EAL: Skipped lcore 134 as core 26 on socket 1 00:06:07.178 EAL: Skipped lcore 135 as core 27 on socket 1 00:06:07.178 EAL: Skipped lcore 136 as core 28 on socket 1 00:06:07.178 EAL: Skipped lcore 137 as core 29 on socket 1 00:06:07.178 EAL: Skipped lcore 138 as core 30 on socket 1 00:06:07.178 EAL: Skipped lcore 139 as core 31 on socket 1 00:06:07.178 EAL: Skipped lcore 140 as core 32 on socket 1 00:06:07.178 EAL: Skipped lcore 141 as core 33 on socket 1 00:06:07.178 EAL: Skipped lcore 142 as core 34 on socket 1 00:06:07.178 EAL: Skipped lcore 143 as core 35 on socket 1 00:06:07.178 EAL: Maximum logical cores by configuration: 128 00:06:07.178 EAL: Detected CPU lcores: 128 00:06:07.178 EAL: Detected NUMA nodes: 2 00:06:07.178 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:07.178 EAL: Detected shared linkage of DPDK 00:06:07.178 EAL: No shared files mode enabled, IPC will be disabled 00:06:07.178 EAL: Bus pci wants IOVA as 'DC' 00:06:07.178 EAL: Buses did not request a specific IOVA mode. 00:06:07.178 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:07.178 EAL: Selected IOVA mode 'VA' 00:06:07.178 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.178 EAL: Probing VFIO support... 00:06:07.178 EAL: IOMMU type 1 (Type 1) is supported 00:06:07.178 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:07.178 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:07.178 EAL: VFIO support initialized 00:06:07.178 EAL: Ask a virtual area of 0x2e000 bytes 00:06:07.178 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:07.178 EAL: Setting up physically contiguous memory... 
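Note: the EAL probe above lands on IOMMU type 1, VFIO, and IOVA-as-VA. Whether a host will get the same result is visible from standard kernel paths before DPDK ever starts; a quick pre-flight check along these lines (nothing here is SPDK-specific):

    ls /sys/kernel/iommu_groups | head              # populated groups => IOMMU enabled in firmware and kernel
    [ -c /dev/vfio/vfio ] && echo "VFIO container device present"
    lsmod | grep -E '^vfio(_pci|_iommu_type1)?'     # vfio_iommu_type1 backs "IOMMU type 1 (Type 1) is supported"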
00:06:07.178 EAL: Setting maximum number of open files to 524288 00:06:07.178 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:07.178 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:07.178 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:07.178 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.178 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:07.178 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:07.178 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.178 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:07.178 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:07.178 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.178 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:07.178 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:07.178 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.178 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:07.178 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:07.178 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.178 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:07.178 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:07.178 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.178 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:07.178 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:07.178 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.178 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:07.178 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:07.178 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.178 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:07.178 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:07.178 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:07.178 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.178 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:07.178 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:07.178 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.178 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:07.178 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:07.178 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.178 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:07.178 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:07.178 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.178 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:07.178 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:07.178 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.178 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:07.178 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:07.178 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.178 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:07.178 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:07.178 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.178 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:07.178 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:07.178 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.178 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:06:07.178 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:07.178 EAL: Hugepages will be freed exactly as allocated. 00:06:07.178 EAL: No shared files mode enabled, IPC is disabled 00:06:07.178 EAL: No shared files mode enabled, IPC is disabled 00:06:07.178 EAL: TSC frequency is ~2400000 KHz 00:06:07.178 EAL: Main lcore 0 is ready (tid=7fc1c9670a00;cpuset=[0]) 00:06:07.178 EAL: Trying to obtain current memory policy. 00:06:07.178 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.178 EAL: Restoring previous memory policy: 0 00:06:07.178 EAL: request: mp_malloc_sync 00:06:07.178 EAL: No shared files mode enabled, IPC is disabled 00:06:07.178 EAL: Heap on socket 0 was expanded by 2MB 00:06:07.178 EAL: No shared files mode enabled, IPC is disabled 00:06:07.178 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:07.178 EAL: Mem event callback 'spdk:(nil)' registered 00:06:07.178 00:06:07.178 00:06:07.178 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.178 http://cunit.sourceforge.net/ 00:06:07.178 00:06:07.178 00:06:07.178 Suite: components_suite 00:06:07.178 Test: vtophys_malloc_test ...passed 00:06:07.178 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:07.178 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.178 EAL: Restoring previous memory policy: 4 00:06:07.178 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.178 EAL: request: mp_malloc_sync 00:06:07.178 EAL: No shared files mode enabled, IPC is disabled 00:06:07.178 EAL: Heap on socket 0 was expanded by 4MB 00:06:07.178 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.178 EAL: request: mp_malloc_sync 00:06:07.178 EAL: No shared files mode enabled, IPC is disabled 00:06:07.179 EAL: Heap on socket 0 was shrunk by 4MB 00:06:07.179 EAL: Trying to obtain current memory policy. 00:06:07.179 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.179 EAL: Restoring previous memory policy: 4 00:06:07.179 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.179 EAL: request: mp_malloc_sync 00:06:07.179 EAL: No shared files mode enabled, IPC is disabled 00:06:07.179 EAL: Heap on socket 0 was expanded by 6MB 00:06:07.179 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.179 EAL: request: mp_malloc_sync 00:06:07.179 EAL: No shared files mode enabled, IPC is disabled 00:06:07.179 EAL: Heap on socket 0 was shrunk by 6MB 00:06:07.179 EAL: Trying to obtain current memory policy. 00:06:07.179 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.179 EAL: Restoring previous memory policy: 4 00:06:07.179 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.179 EAL: request: mp_malloc_sync 00:06:07.179 EAL: No shared files mode enabled, IPC is disabled 00:06:07.179 EAL: Heap on socket 0 was expanded by 10MB 00:06:07.179 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.179 EAL: request: mp_malloc_sync 00:06:07.179 EAL: No shared files mode enabled, IPC is disabled 00:06:07.179 EAL: Heap on socket 0 was shrunk by 10MB 00:06:07.179 EAL: Trying to obtain current memory policy. 
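Note: the memseg lists above are carved out of 2 MB hugepages, and the earlier "No free 2048 kB hugepages reported on node 1" warning matches the setup.sh status table (node1: 0 / 0). The per-node pools can be checked directly from sysfs:

    for d in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
        echo "$d: $(cat "$d/free_hugepages") free / $(cat "$d/nr_hugepages") reserved"
    done
    grep -i '^huge' /proc/meminfo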
00:06:07.179 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.179 EAL: Restoring previous memory policy: 4 00:06:07.179 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.179 EAL: request: mp_malloc_sync 00:06:07.179 EAL: No shared files mode enabled, IPC is disabled 00:06:07.179 EAL: Heap on socket 0 was expanded by 18MB 00:06:07.179 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.179 EAL: request: mp_malloc_sync 00:06:07.179 EAL: No shared files mode enabled, IPC is disabled 00:06:07.179 EAL: Heap on socket 0 was shrunk by 18MB 00:06:07.179 EAL: Trying to obtain current memory policy. 00:06:07.179 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.179 EAL: Restoring previous memory policy: 4 00:06:07.179 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.179 EAL: request: mp_malloc_sync 00:06:07.179 EAL: No shared files mode enabled, IPC is disabled 00:06:07.179 EAL: Heap on socket 0 was expanded by 34MB 00:06:07.179 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.179 EAL: request: mp_malloc_sync 00:06:07.179 EAL: No shared files mode enabled, IPC is disabled 00:06:07.179 EAL: Heap on socket 0 was shrunk by 34MB 00:06:07.179 EAL: Trying to obtain current memory policy. 00:06:07.179 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.179 EAL: Restoring previous memory policy: 4 00:06:07.179 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.179 EAL: request: mp_malloc_sync 00:06:07.179 EAL: No shared files mode enabled, IPC is disabled 00:06:07.179 EAL: Heap on socket 0 was expanded by 66MB 00:06:07.179 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.179 EAL: request: mp_malloc_sync 00:06:07.179 EAL: No shared files mode enabled, IPC is disabled 00:06:07.179 EAL: Heap on socket 0 was shrunk by 66MB 00:06:07.179 EAL: Trying to obtain current memory policy. 00:06:07.179 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.179 EAL: Restoring previous memory policy: 4 00:06:07.179 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.179 EAL: request: mp_malloc_sync 00:06:07.179 EAL: No shared files mode enabled, IPC is disabled 00:06:07.179 EAL: Heap on socket 0 was expanded by 130MB 00:06:07.179 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.179 EAL: request: mp_malloc_sync 00:06:07.179 EAL: No shared files mode enabled, IPC is disabled 00:06:07.179 EAL: Heap on socket 0 was shrunk by 130MB 00:06:07.179 EAL: Trying to obtain current memory policy. 00:06:07.179 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.179 EAL: Restoring previous memory policy: 4 00:06:07.179 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.179 EAL: request: mp_malloc_sync 00:06:07.179 EAL: No shared files mode enabled, IPC is disabled 00:06:07.179 EAL: Heap on socket 0 was expanded by 258MB 00:06:07.179 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.179 EAL: request: mp_malloc_sync 00:06:07.179 EAL: No shared files mode enabled, IPC is disabled 00:06:07.179 EAL: Heap on socket 0 was shrunk by 258MB 00:06:07.179 EAL: Trying to obtain current memory policy. 
00:06:07.179 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.439 EAL: Restoring previous memory policy: 4 00:06:07.439 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.439 EAL: request: mp_malloc_sync 00:06:07.439 EAL: No shared files mode enabled, IPC is disabled 00:06:07.439 EAL: Heap on socket 0 was expanded by 514MB 00:06:07.439 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.439 EAL: request: mp_malloc_sync 00:06:07.439 EAL: No shared files mode enabled, IPC is disabled 00:06:07.439 EAL: Heap on socket 0 was shrunk by 514MB 00:06:07.439 EAL: Trying to obtain current memory policy. 00:06:07.439 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.698 EAL: Restoring previous memory policy: 4 00:06:07.698 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.698 EAL: request: mp_malloc_sync 00:06:07.698 EAL: No shared files mode enabled, IPC is disabled 00:06:07.698 EAL: Heap on socket 0 was expanded by 1026MB 00:06:07.698 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.698 EAL: request: mp_malloc_sync 00:06:07.699 EAL: No shared files mode enabled, IPC is disabled 00:06:07.699 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:07.699 passed 00:06:07.699 00:06:07.699 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.699 suites 1 1 n/a 0 0 00:06:07.699 tests 2 2 2 0 0 00:06:07.699 asserts 497 497 497 0 n/a 00:06:07.699 00:06:07.699 Elapsed time = 0.641 seconds 00:06:07.699 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.699 EAL: request: mp_malloc_sync 00:06:07.699 EAL: No shared files mode enabled, IPC is disabled 00:06:07.699 EAL: Heap on socket 0 was shrunk by 2MB 00:06:07.699 EAL: No shared files mode enabled, IPC is disabled 00:06:07.699 EAL: No shared files mode enabled, IPC is disabled 00:06:07.699 EAL: No shared files mode enabled, IPC is disabled 00:06:07.699 00:06:07.699 real 0m0.769s 00:06:07.699 user 0m0.405s 00:06:07.699 sys 0m0.337s 00:06:07.699 09:14:54 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.699 09:14:54 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:07.699 ************************************ 00:06:07.699 END TEST env_vtophys 00:06:07.699 ************************************ 00:06:07.959 09:14:54 env -- common/autotest_common.sh@1142 -- # return 0 00:06:07.959 09:14:54 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:07.959 09:14:54 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.959 09:14:54 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.959 09:14:54 env -- common/autotest_common.sh@10 -- # set +x 00:06:07.959 ************************************ 00:06:07.959 START TEST env_pci 00:06:07.959 ************************************ 00:06:07.959 09:14:54 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:07.959 00:06:07.959 00:06:07.959 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.959 http://cunit.sourceforge.net/ 00:06:07.959 00:06:07.959 00:06:07.959 Suite: pci 00:06:07.959 Test: pci_hook ...[2024-07-15 09:14:54.977567] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 467165 has claimed it 00:06:07.959 EAL: Cannot find device (10000:00:01.0) 00:06:07.959 EAL: Failed to attach device on primary process 00:06:07.959 passed 00:06:07.959 
00:06:07.959 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.959 suites 1 1 n/a 0 0 00:06:07.959 tests 1 1 1 0 0 00:06:07.959 asserts 25 25 25 0 n/a 00:06:07.959 00:06:07.959 Elapsed time = 0.032 seconds 00:06:07.959 00:06:07.959 real 0m0.053s 00:06:07.959 user 0m0.018s 00:06:07.959 sys 0m0.034s 00:06:07.959 09:14:55 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.959 09:14:55 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:07.959 ************************************ 00:06:07.959 END TEST env_pci 00:06:07.959 ************************************ 00:06:07.959 09:14:55 env -- common/autotest_common.sh@1142 -- # return 0 00:06:07.959 09:14:55 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:07.959 09:14:55 env -- env/env.sh@15 -- # uname 00:06:07.959 09:14:55 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:07.959 09:14:55 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:07.959 09:14:55 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:07.959 09:14:55 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:07.959 09:14:55 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.959 09:14:55 env -- common/autotest_common.sh@10 -- # set +x 00:06:07.959 ************************************ 00:06:07.959 START TEST env_dpdk_post_init 00:06:07.959 ************************************ 00:06:07.959 09:14:55 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:07.959 EAL: Detected CPU lcores: 128 00:06:07.959 EAL: Detected NUMA nodes: 2 00:06:07.959 EAL: Detected shared linkage of DPDK 00:06:07.959 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:07.959 EAL: Selected IOVA mode 'VA' 00:06:07.959 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.959 EAL: VFIO support initialized 00:06:07.959 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:08.220 EAL: Using IOMMU type 1 (Type 1) 00:06:08.220 EAL: Ignore mapping IO port bar(1) 00:06:08.481 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:06:08.481 EAL: Ignore mapping IO port bar(1) 00:06:08.482 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:06:08.742 EAL: Ignore mapping IO port bar(1) 00:06:08.742 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:06:09.003 EAL: Ignore mapping IO port bar(1) 00:06:09.003 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:06:09.264 EAL: Ignore mapping IO port bar(1) 00:06:09.264 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:06:09.264 EAL: Ignore mapping IO port bar(1) 00:06:09.525 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:06:09.525 EAL: Ignore mapping IO port bar(1) 00:06:09.786 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:06:09.786 EAL: Ignore mapping IO port bar(1) 00:06:10.047 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:06:10.047 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:06:10.352 EAL: Ignore mapping IO port bar(1) 00:06:10.352 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 
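Note: env_dpdk_post_init only finds the ioat and NVMe devices because setup.sh bound them to vfio-pci beforehand (the "ioatdma -> vfio-pci" and "nvme -> vfio-pci" lines earlier in this log). Which driver currently owns a BDF can be read straight from sysfs, e.g. for the controller under test:

    bdf=0000:65:00.0
    basename "$(readlink /sys/bus/pci/devices/$bdf/driver)"    # vfio-pci while the tests run, nvme otherwise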
00:06:10.613 EAL: Ignore mapping IO port bar(1) 00:06:10.613 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:06:10.613 EAL: Ignore mapping IO port bar(1) 00:06:10.874 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:06:10.874 EAL: Ignore mapping IO port bar(1) 00:06:11.134 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:06:11.134 EAL: Ignore mapping IO port bar(1) 00:06:11.395 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:06:11.395 EAL: Ignore mapping IO port bar(1) 00:06:11.395 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:06:11.656 EAL: Ignore mapping IO port bar(1) 00:06:11.656 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:06:11.917 EAL: Ignore mapping IO port bar(1) 00:06:11.917 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:06:11.917 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:06:11.917 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:06:12.178 Starting DPDK initialization... 00:06:12.178 Starting SPDK post initialization... 00:06:12.178 SPDK NVMe probe 00:06:12.178 Attaching to 0000:65:00.0 00:06:12.178 Attached to 0000:65:00.0 00:06:12.178 Cleaning up... 00:06:13.635 00:06:13.635 real 0m5.729s 00:06:13.635 user 0m0.194s 00:06:13.635 sys 0m0.075s 00:06:13.635 09:15:00 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.635 09:15:00 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:13.635 ************************************ 00:06:13.635 END TEST env_dpdk_post_init 00:06:13.635 ************************************ 00:06:13.895 09:15:00 env -- common/autotest_common.sh@1142 -- # return 0 00:06:13.895 09:15:00 env -- env/env.sh@26 -- # uname 00:06:13.895 09:15:00 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:13.895 09:15:00 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:13.895 09:15:00 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.895 09:15:00 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.895 09:15:00 env -- common/autotest_common.sh@10 -- # set +x 00:06:13.895 ************************************ 00:06:13.895 START TEST env_mem_callbacks 00:06:13.895 ************************************ 00:06:13.895 09:15:00 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:13.895 EAL: Detected CPU lcores: 128 00:06:13.895 EAL: Detected NUMA nodes: 2 00:06:13.895 EAL: Detected shared linkage of DPDK 00:06:13.895 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:13.895 EAL: Selected IOVA mode 'VA' 00:06:13.895 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.895 EAL: VFIO support initialized 00:06:13.895 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:13.895 00:06:13.895 00:06:13.895 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.895 http://cunit.sourceforge.net/ 00:06:13.895 00:06:13.895 00:06:13.895 Suite: memory 00:06:13.896 Test: test ... 
00:06:13.896 register 0x200000200000 2097152 00:06:13.896 malloc 3145728 00:06:13.896 register 0x200000400000 4194304 00:06:13.896 buf 0x200000500000 len 3145728 PASSED 00:06:13.896 malloc 64 00:06:13.896 buf 0x2000004fff40 len 64 PASSED 00:06:13.896 malloc 4194304 00:06:13.896 register 0x200000800000 6291456 00:06:13.896 buf 0x200000a00000 len 4194304 PASSED 00:06:13.896 free 0x200000500000 3145728 00:06:13.896 free 0x2000004fff40 64 00:06:13.896 unregister 0x200000400000 4194304 PASSED 00:06:13.896 free 0x200000a00000 4194304 00:06:13.896 unregister 0x200000800000 6291456 PASSED 00:06:13.896 malloc 8388608 00:06:13.896 register 0x200000400000 10485760 00:06:13.896 buf 0x200000600000 len 8388608 PASSED 00:06:13.896 free 0x200000600000 8388608 00:06:13.896 unregister 0x200000400000 10485760 PASSED 00:06:13.896 passed 00:06:13.896 00:06:13.896 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.896 suites 1 1 n/a 0 0 00:06:13.896 tests 1 1 1 0 0 00:06:13.896 asserts 15 15 15 0 n/a 00:06:13.896 00:06:13.896 Elapsed time = 0.005 seconds 00:06:13.896 00:06:13.896 real 0m0.061s 00:06:13.896 user 0m0.019s 00:06:13.896 sys 0m0.042s 00:06:13.896 09:15:00 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.896 09:15:00 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:13.896 ************************************ 00:06:13.896 END TEST env_mem_callbacks 00:06:13.896 ************************************ 00:06:13.896 09:15:01 env -- common/autotest_common.sh@1142 -- # return 0 00:06:13.896 00:06:13.896 real 0m7.328s 00:06:13.896 user 0m1.041s 00:06:13.896 sys 0m0.828s 00:06:13.896 09:15:01 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.896 09:15:01 env -- common/autotest_common.sh@10 -- # set +x 00:06:13.896 ************************************ 00:06:13.896 END TEST env 00:06:13.896 ************************************ 00:06:13.896 09:15:01 -- common/autotest_common.sh@1142 -- # return 0 00:06:13.896 09:15:01 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:13.896 09:15:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.896 09:15:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.896 09:15:01 -- common/autotest_common.sh@10 -- # set +x 00:06:13.896 ************************************ 00:06:13.896 START TEST rpc 00:06:13.896 ************************************ 00:06:13.896 09:15:01 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:14.155 * Looking for test storage... 00:06:14.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:14.155 09:15:01 rpc -- rpc/rpc.sh@65 -- # spdk_pid=468677 00:06:14.155 09:15:01 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.155 09:15:01 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:14.155 09:15:01 rpc -- rpc/rpc.sh@67 -- # waitforlisten 468677 00:06:14.155 09:15:01 rpc -- common/autotest_common.sh@829 -- # '[' -z 468677 ']' 00:06:14.155 09:15:01 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.155 09:15:01 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.155 09:15:01 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
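Note: waitforlisten below blocks until the freshly launched spdk_tgt answers on its UNIX-domain RPC socket. A simplified loop in the same spirit (illustrative; the real helper also handles timeouts and a configurable socket path):

    sock=/var/tmp/spdk.sock
    pid=468677                                # spdk_tgt pid from the trace
    while kill -0 "$pid" 2>/dev/null; do      # give up if the target died
        [ -S "$sock" ] && ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done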
00:06:14.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.155 09:15:01 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.155 09:15:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.155 [2024-07-15 09:15:01.230476] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:06:14.155 [2024-07-15 09:15:01.230533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid468677 ] 00:06:14.155 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.155 [2024-07-15 09:15:01.298076] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.416 [2024-07-15 09:15:01.363960] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:14.416 [2024-07-15 09:15:01.363999] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 468677' to capture a snapshot of events at runtime. 00:06:14.416 [2024-07-15 09:15:01.364006] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:14.416 [2024-07-15 09:15:01.364013] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:14.416 [2024-07-15 09:15:01.364018] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid468677 for offline analysis/debug. 00:06:14.416 [2024-07-15 09:15:01.364044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.986 09:15:01 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.986 09:15:01 rpc -- common/autotest_common.sh@862 -- # return 0 00:06:14.986 09:15:01 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:14.986 09:15:01 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:14.986 09:15:01 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:14.986 09:15:02 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:14.986 09:15:02 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.986 09:15:02 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.986 09:15:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.986 ************************************ 00:06:14.986 START TEST rpc_integrity 00:06:14.986 ************************************ 00:06:14.986 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:14.986 09:15:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:14.986 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.986 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:14.986 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.986 09:15:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:06:14.986 09:15:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:14.986 09:15:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:14.986 09:15:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:14.986 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.986 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:14.986 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.986 09:15:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:14.986 09:15:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:14.986 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.986 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:14.986 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.986 09:15:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:14.986 { 00:06:14.986 "name": "Malloc0", 00:06:14.986 "aliases": [ 00:06:14.986 "8a9c2f17-cfa7-479c-9394-6eb221fa751b" 00:06:14.986 ], 00:06:14.986 "product_name": "Malloc disk", 00:06:14.986 "block_size": 512, 00:06:14.986 "num_blocks": 16384, 00:06:14.986 "uuid": "8a9c2f17-cfa7-479c-9394-6eb221fa751b", 00:06:14.986 "assigned_rate_limits": { 00:06:14.986 "rw_ios_per_sec": 0, 00:06:14.986 "rw_mbytes_per_sec": 0, 00:06:14.986 "r_mbytes_per_sec": 0, 00:06:14.986 "w_mbytes_per_sec": 0 00:06:14.986 }, 00:06:14.986 "claimed": false, 00:06:14.986 "zoned": false, 00:06:14.986 "supported_io_types": { 00:06:14.986 "read": true, 00:06:14.986 "write": true, 00:06:14.986 "unmap": true, 00:06:14.986 "flush": true, 00:06:14.986 "reset": true, 00:06:14.986 "nvme_admin": false, 00:06:14.986 "nvme_io": false, 00:06:14.986 "nvme_io_md": false, 00:06:14.986 "write_zeroes": true, 00:06:14.986 "zcopy": true, 00:06:14.986 "get_zone_info": false, 00:06:14.986 "zone_management": false, 00:06:14.986 "zone_append": false, 00:06:14.986 "compare": false, 00:06:14.986 "compare_and_write": false, 00:06:14.986 "abort": true, 00:06:14.986 "seek_hole": false, 00:06:14.986 "seek_data": false, 00:06:14.986 "copy": true, 00:06:14.986 "nvme_iov_md": false 00:06:14.986 }, 00:06:14.986 "memory_domains": [ 00:06:14.986 { 00:06:14.986 "dma_device_id": "system", 00:06:14.986 "dma_device_type": 1 00:06:14.986 }, 00:06:14.986 { 00:06:14.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:14.986 "dma_device_type": 2 00:06:14.986 } 00:06:14.986 ], 00:06:14.986 "driver_specific": {} 00:06:14.986 } 00:06:14.986 ]' 00:06:14.986 09:15:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:14.986 09:15:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:14.986 09:15:02 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:14.986 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.986 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:14.987 [2024-07-15 09:15:02.181029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:14.987 [2024-07-15 09:15:02.181061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:14.987 [2024-07-15 09:15:02.181073] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19c8a10 00:06:14.987 [2024-07-15 09:15:02.181081] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:14.987 
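The rpc_integrity sequence running here can be reproduced by hand against a live spdk_tgt with the stock scripts/rpc.py client. A minimal sketch using only the RPCs visible in this log; the default /var/tmp/spdk.sock socket and the test's 8 MiB x 512 B malloc geometry are assumed:

    RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
    $RPC bdev_malloc_create 8 512                      # prints the new bdev name, e.g. Malloc0
    $RPC bdev_passthru_create -b Malloc0 -p Passthru0  # stack a passthru vbdev on top of it
    $RPC bdev_get_bdevs | jq length                    # 2, the check the test performs below
    $RPC bdev_passthru_delete Passthru0                # teardown mirrors the test's cleanup
    $RPC bdev_malloc_delete Malloc0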
[2024-07-15 09:15:02.182475] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:14.987 [2024-07-15 09:15:02.182495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:14.987 Passthru0 00:06:15.248 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.248 09:15:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:15.248 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.248 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.248 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.248 09:15:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:15.248 { 00:06:15.248 "name": "Malloc0", 00:06:15.248 "aliases": [ 00:06:15.248 "8a9c2f17-cfa7-479c-9394-6eb221fa751b" 00:06:15.248 ], 00:06:15.248 "product_name": "Malloc disk", 00:06:15.248 "block_size": 512, 00:06:15.248 "num_blocks": 16384, 00:06:15.248 "uuid": "8a9c2f17-cfa7-479c-9394-6eb221fa751b", 00:06:15.248 "assigned_rate_limits": { 00:06:15.248 "rw_ios_per_sec": 0, 00:06:15.248 "rw_mbytes_per_sec": 0, 00:06:15.248 "r_mbytes_per_sec": 0, 00:06:15.248 "w_mbytes_per_sec": 0 00:06:15.248 }, 00:06:15.248 "claimed": true, 00:06:15.248 "claim_type": "exclusive_write", 00:06:15.248 "zoned": false, 00:06:15.248 "supported_io_types": { 00:06:15.248 "read": true, 00:06:15.248 "write": true, 00:06:15.248 "unmap": true, 00:06:15.248 "flush": true, 00:06:15.248 "reset": true, 00:06:15.248 "nvme_admin": false, 00:06:15.248 "nvme_io": false, 00:06:15.248 "nvme_io_md": false, 00:06:15.248 "write_zeroes": true, 00:06:15.248 "zcopy": true, 00:06:15.248 "get_zone_info": false, 00:06:15.248 "zone_management": false, 00:06:15.248 "zone_append": false, 00:06:15.248 "compare": false, 00:06:15.248 "compare_and_write": false, 00:06:15.248 "abort": true, 00:06:15.248 "seek_hole": false, 00:06:15.248 "seek_data": false, 00:06:15.248 "copy": true, 00:06:15.248 "nvme_iov_md": false 00:06:15.248 }, 00:06:15.248 "memory_domains": [ 00:06:15.248 { 00:06:15.248 "dma_device_id": "system", 00:06:15.248 "dma_device_type": 1 00:06:15.248 }, 00:06:15.248 { 00:06:15.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.248 "dma_device_type": 2 00:06:15.248 } 00:06:15.248 ], 00:06:15.248 "driver_specific": {} 00:06:15.248 }, 00:06:15.248 { 00:06:15.248 "name": "Passthru0", 00:06:15.248 "aliases": [ 00:06:15.248 "43aea1b6-4c41-576d-9497-f8fc29f09e0e" 00:06:15.248 ], 00:06:15.248 "product_name": "passthru", 00:06:15.248 "block_size": 512, 00:06:15.248 "num_blocks": 16384, 00:06:15.248 "uuid": "43aea1b6-4c41-576d-9497-f8fc29f09e0e", 00:06:15.248 "assigned_rate_limits": { 00:06:15.248 "rw_ios_per_sec": 0, 00:06:15.248 "rw_mbytes_per_sec": 0, 00:06:15.248 "r_mbytes_per_sec": 0, 00:06:15.248 "w_mbytes_per_sec": 0 00:06:15.248 }, 00:06:15.248 "claimed": false, 00:06:15.248 "zoned": false, 00:06:15.248 "supported_io_types": { 00:06:15.248 "read": true, 00:06:15.248 "write": true, 00:06:15.248 "unmap": true, 00:06:15.248 "flush": true, 00:06:15.248 "reset": true, 00:06:15.248 "nvme_admin": false, 00:06:15.248 "nvme_io": false, 00:06:15.248 "nvme_io_md": false, 00:06:15.248 "write_zeroes": true, 00:06:15.248 "zcopy": true, 00:06:15.248 "get_zone_info": false, 00:06:15.248 "zone_management": false, 00:06:15.248 "zone_append": false, 00:06:15.248 "compare": false, 00:06:15.248 "compare_and_write": false, 00:06:15.248 "abort": true, 00:06:15.248 "seek_hole": false, 
00:06:15.248 "seek_data": false, 00:06:15.248 "copy": true, 00:06:15.248 "nvme_iov_md": false 00:06:15.248 }, 00:06:15.248 "memory_domains": [ 00:06:15.248 { 00:06:15.248 "dma_device_id": "system", 00:06:15.248 "dma_device_type": 1 00:06:15.248 }, 00:06:15.248 { 00:06:15.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.248 "dma_device_type": 2 00:06:15.248 } 00:06:15.248 ], 00:06:15.248 "driver_specific": { 00:06:15.248 "passthru": { 00:06:15.248 "name": "Passthru0", 00:06:15.248 "base_bdev_name": "Malloc0" 00:06:15.248 } 00:06:15.248 } 00:06:15.248 } 00:06:15.248 ]' 00:06:15.248 09:15:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:15.248 09:15:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:15.248 09:15:02 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:15.248 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.248 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.248 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.248 09:15:02 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:15.248 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.248 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.248 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.248 09:15:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:15.248 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.248 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.248 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.248 09:15:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:15.248 09:15:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:15.248 09:15:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:15.248 00:06:15.248 real 0m0.290s 00:06:15.248 user 0m0.185s 00:06:15.248 sys 0m0.037s 00:06:15.248 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.248 09:15:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.248 ************************************ 00:06:15.248 END TEST rpc_integrity 00:06:15.248 ************************************ 00:06:15.248 09:15:02 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:15.248 09:15:02 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:15.248 09:15:02 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.248 09:15:02 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.248 09:15:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.248 ************************************ 00:06:15.248 START TEST rpc_plugins 00:06:15.248 ************************************ 00:06:15.248 09:15:02 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:06:15.248 09:15:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:15.248 09:15:02 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.248 09:15:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:15.248 09:15:02 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.248 09:15:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:15.248 09:15:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:06:15.248 09:15:02 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.248 09:15:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:15.248 09:15:02 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.248 09:15:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:15.248 { 00:06:15.248 "name": "Malloc1", 00:06:15.248 "aliases": [ 00:06:15.248 "d119762e-b2c1-4769-9ed9-b4c195e13b21" 00:06:15.248 ], 00:06:15.248 "product_name": "Malloc disk", 00:06:15.248 "block_size": 4096, 00:06:15.248 "num_blocks": 256, 00:06:15.248 "uuid": "d119762e-b2c1-4769-9ed9-b4c195e13b21", 00:06:15.248 "assigned_rate_limits": { 00:06:15.248 "rw_ios_per_sec": 0, 00:06:15.248 "rw_mbytes_per_sec": 0, 00:06:15.248 "r_mbytes_per_sec": 0, 00:06:15.248 "w_mbytes_per_sec": 0 00:06:15.248 }, 00:06:15.248 "claimed": false, 00:06:15.248 "zoned": false, 00:06:15.248 "supported_io_types": { 00:06:15.248 "read": true, 00:06:15.248 "write": true, 00:06:15.248 "unmap": true, 00:06:15.248 "flush": true, 00:06:15.248 "reset": true, 00:06:15.248 "nvme_admin": false, 00:06:15.248 "nvme_io": false, 00:06:15.248 "nvme_io_md": false, 00:06:15.248 "write_zeroes": true, 00:06:15.248 "zcopy": true, 00:06:15.248 "get_zone_info": false, 00:06:15.248 "zone_management": false, 00:06:15.248 "zone_append": false, 00:06:15.248 "compare": false, 00:06:15.248 "compare_and_write": false, 00:06:15.248 "abort": true, 00:06:15.248 "seek_hole": false, 00:06:15.248 "seek_data": false, 00:06:15.248 "copy": true, 00:06:15.248 "nvme_iov_md": false 00:06:15.248 }, 00:06:15.248 "memory_domains": [ 00:06:15.248 { 00:06:15.248 "dma_device_id": "system", 00:06:15.248 "dma_device_type": 1 00:06:15.248 }, 00:06:15.248 { 00:06:15.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.248 "dma_device_type": 2 00:06:15.248 } 00:06:15.248 ], 00:06:15.248 "driver_specific": {} 00:06:15.248 } 00:06:15.248 ]' 00:06:15.248 09:15:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:15.510 09:15:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:15.510 09:15:02 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:15.510 09:15:02 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.510 09:15:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:15.510 09:15:02 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.510 09:15:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:15.510 09:15:02 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.510 09:15:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:15.510 09:15:02 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.510 09:15:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:15.510 09:15:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:15.510 09:15:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:15.510 00:06:15.510 real 0m0.153s 00:06:15.510 user 0m0.092s 00:06:15.510 sys 0m0.023s 00:06:15.510 09:15:02 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.510 09:15:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:15.510 ************************************ 00:06:15.510 END TEST rpc_plugins 00:06:15.510 ************************************ 00:06:15.510 09:15:02 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:15.510 09:15:02 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:15.510 09:15:02 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.510 09:15:02 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.510 09:15:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.510 ************************************ 00:06:15.510 START TEST rpc_trace_cmd_test 00:06:15.510 ************************************ 00:06:15.510 09:15:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:06:15.510 09:15:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:15.510 09:15:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:15.510 09:15:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.510 09:15:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:15.510 09:15:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.510 09:15:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:15.510 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid468677", 00:06:15.510 "tpoint_group_mask": "0x8", 00:06:15.510 "iscsi_conn": { 00:06:15.510 "mask": "0x2", 00:06:15.510 "tpoint_mask": "0x0" 00:06:15.510 }, 00:06:15.510 "scsi": { 00:06:15.510 "mask": "0x4", 00:06:15.510 "tpoint_mask": "0x0" 00:06:15.510 }, 00:06:15.510 "bdev": { 00:06:15.510 "mask": "0x8", 00:06:15.510 "tpoint_mask": "0xffffffffffffffff" 00:06:15.510 }, 00:06:15.510 "nvmf_rdma": { 00:06:15.510 "mask": "0x10", 00:06:15.510 "tpoint_mask": "0x0" 00:06:15.510 }, 00:06:15.510 "nvmf_tcp": { 00:06:15.510 "mask": "0x20", 00:06:15.510 "tpoint_mask": "0x0" 00:06:15.510 }, 00:06:15.510 "ftl": { 00:06:15.510 "mask": "0x40", 00:06:15.510 "tpoint_mask": "0x0" 00:06:15.510 }, 00:06:15.510 "blobfs": { 00:06:15.510 "mask": "0x80", 00:06:15.510 "tpoint_mask": "0x0" 00:06:15.510 }, 00:06:15.510 "dsa": { 00:06:15.510 "mask": "0x200", 00:06:15.510 "tpoint_mask": "0x0" 00:06:15.510 }, 00:06:15.510 "thread": { 00:06:15.510 "mask": "0x400", 00:06:15.510 "tpoint_mask": "0x0" 00:06:15.510 }, 00:06:15.510 "nvme_pcie": { 00:06:15.510 "mask": "0x800", 00:06:15.510 "tpoint_mask": "0x0" 00:06:15.510 }, 00:06:15.510 "iaa": { 00:06:15.510 "mask": "0x1000", 00:06:15.510 "tpoint_mask": "0x0" 00:06:15.510 }, 00:06:15.510 "nvme_tcp": { 00:06:15.510 "mask": "0x2000", 00:06:15.510 "tpoint_mask": "0x0" 00:06:15.510 }, 00:06:15.510 "bdev_nvme": { 00:06:15.510 "mask": "0x4000", 00:06:15.510 "tpoint_mask": "0x0" 00:06:15.510 }, 00:06:15.510 "sock": { 00:06:15.510 "mask": "0x8000", 00:06:15.510 "tpoint_mask": "0x0" 00:06:15.510 } 00:06:15.510 }' 00:06:15.510 09:15:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:15.510 09:15:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:15.510 09:15:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:15.771 09:15:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:15.771 09:15:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:15.771 09:15:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:15.771 09:15:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:15.771 09:15:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:15.771 09:15:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:15.771 09:15:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
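The trace_get_info dump above reflects the '-e bdev' flag the target was started with: the bdev tpoint group (mask 0x8) is fully enabled while every other group is masked off. A short sketch of inspecting and capturing that state, using only commands quoted earlier in this log (the pid 468677 is this run's and is illustrative only):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt -e bdev &                               # enable bdev tracepoints at startup
    $SPDK/scripts/rpc.py trace_get_info | jq -r .tpoint_group_mask   # expect "0x8"
    spdk_trace -s spdk_tgt -p 468677                                 # snapshot, per the app_setup_trace notice above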
00:06:15.771 00:06:15.771 real 0m0.241s 00:06:15.771 user 0m0.211s 00:06:15.771 sys 0m0.023s 00:06:15.771 09:15:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.771 09:15:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:15.771 ************************************ 00:06:15.771 END TEST rpc_trace_cmd_test 00:06:15.771 ************************************ 00:06:15.771 09:15:02 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:15.771 09:15:02 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:15.771 09:15:02 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:15.771 09:15:02 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:15.771 09:15:02 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.771 09:15:02 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.771 09:15:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.771 ************************************ 00:06:15.771 START TEST rpc_daemon_integrity 00:06:15.771 ************************************ 00:06:15.771 09:15:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:15.771 09:15:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:15.771 09:15:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.771 09:15:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.771 09:15:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.771 09:15:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:15.771 09:15:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:16.032 09:15:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:16.032 09:15:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:16.032 09:15:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.032 09:15:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.032 09:15:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.032 09:15:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:16.032 09:15:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:16.032 09:15:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.032 09:15:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.032 09:15:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.032 09:15:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:16.032 { 00:06:16.032 "name": "Malloc2", 00:06:16.032 "aliases": [ 00:06:16.032 "c8dbac5c-752a-4811-8373-af8d9661c5f5" 00:06:16.032 ], 00:06:16.032 "product_name": "Malloc disk", 00:06:16.032 "block_size": 512, 00:06:16.032 "num_blocks": 16384, 00:06:16.032 "uuid": "c8dbac5c-752a-4811-8373-af8d9661c5f5", 00:06:16.032 "assigned_rate_limits": { 00:06:16.032 "rw_ios_per_sec": 0, 00:06:16.032 "rw_mbytes_per_sec": 0, 00:06:16.032 "r_mbytes_per_sec": 0, 00:06:16.032 "w_mbytes_per_sec": 0 00:06:16.032 }, 00:06:16.032 "claimed": false, 00:06:16.032 "zoned": false, 00:06:16.032 "supported_io_types": { 00:06:16.032 "read": true, 00:06:16.032 "write": true, 00:06:16.032 "unmap": true, 00:06:16.032 "flush": true, 00:06:16.032 "reset": true, 00:06:16.032 "nvme_admin": false, 00:06:16.032 "nvme_io": false, 
00:06:16.032 "nvme_io_md": false, 00:06:16.032 "write_zeroes": true, 00:06:16.032 "zcopy": true, 00:06:16.032 "get_zone_info": false, 00:06:16.032 "zone_management": false, 00:06:16.032 "zone_append": false, 00:06:16.032 "compare": false, 00:06:16.032 "compare_and_write": false, 00:06:16.032 "abort": true, 00:06:16.032 "seek_hole": false, 00:06:16.032 "seek_data": false, 00:06:16.032 "copy": true, 00:06:16.032 "nvme_iov_md": false 00:06:16.032 }, 00:06:16.032 "memory_domains": [ 00:06:16.032 { 00:06:16.032 "dma_device_id": "system", 00:06:16.032 "dma_device_type": 1 00:06:16.032 }, 00:06:16.032 { 00:06:16.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:16.032 "dma_device_type": 2 00:06:16.032 } 00:06:16.032 ], 00:06:16.032 "driver_specific": {} 00:06:16.032 } 00:06:16.032 ]' 00:06:16.032 09:15:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:16.032 09:15:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:16.032 09:15:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:16.032 09:15:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.032 09:15:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.032 [2024-07-15 09:15:03.087510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:16.033 [2024-07-15 09:15:03.087539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:16.033 [2024-07-15 09:15:03.087553] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b5ffe0 00:06:16.033 [2024-07-15 09:15:03.087560] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:16.033 [2024-07-15 09:15:03.088835] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:16.033 [2024-07-15 09:15:03.088854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:16.033 Passthru0 00:06:16.033 09:15:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.033 09:15:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:16.033 09:15:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.033 09:15:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.033 09:15:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.033 09:15:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:16.033 { 00:06:16.033 "name": "Malloc2", 00:06:16.033 "aliases": [ 00:06:16.033 "c8dbac5c-752a-4811-8373-af8d9661c5f5" 00:06:16.033 ], 00:06:16.033 "product_name": "Malloc disk", 00:06:16.033 "block_size": 512, 00:06:16.033 "num_blocks": 16384, 00:06:16.033 "uuid": "c8dbac5c-752a-4811-8373-af8d9661c5f5", 00:06:16.033 "assigned_rate_limits": { 00:06:16.033 "rw_ios_per_sec": 0, 00:06:16.033 "rw_mbytes_per_sec": 0, 00:06:16.033 "r_mbytes_per_sec": 0, 00:06:16.033 "w_mbytes_per_sec": 0 00:06:16.033 }, 00:06:16.033 "claimed": true, 00:06:16.033 "claim_type": "exclusive_write", 00:06:16.033 "zoned": false, 00:06:16.033 "supported_io_types": { 00:06:16.033 "read": true, 00:06:16.033 "write": true, 00:06:16.033 "unmap": true, 00:06:16.033 "flush": true, 00:06:16.033 "reset": true, 00:06:16.033 "nvme_admin": false, 00:06:16.033 "nvme_io": false, 00:06:16.033 "nvme_io_md": false, 00:06:16.033 "write_zeroes": true, 00:06:16.033 "zcopy": true, 00:06:16.033 "get_zone_info": 
false, 00:06:16.033 "zone_management": false, 00:06:16.033 "zone_append": false, 00:06:16.033 "compare": false, 00:06:16.033 "compare_and_write": false, 00:06:16.033 "abort": true, 00:06:16.033 "seek_hole": false, 00:06:16.033 "seek_data": false, 00:06:16.033 "copy": true, 00:06:16.033 "nvme_iov_md": false 00:06:16.033 }, 00:06:16.033 "memory_domains": [ 00:06:16.033 { 00:06:16.033 "dma_device_id": "system", 00:06:16.033 "dma_device_type": 1 00:06:16.033 }, 00:06:16.033 { 00:06:16.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:16.033 "dma_device_type": 2 00:06:16.033 } 00:06:16.033 ], 00:06:16.033 "driver_specific": {} 00:06:16.033 }, 00:06:16.033 { 00:06:16.033 "name": "Passthru0", 00:06:16.033 "aliases": [ 00:06:16.033 "64cae9b8-3dad-5ae2-9267-e0d0932df745" 00:06:16.033 ], 00:06:16.033 "product_name": "passthru", 00:06:16.033 "block_size": 512, 00:06:16.033 "num_blocks": 16384, 00:06:16.033 "uuid": "64cae9b8-3dad-5ae2-9267-e0d0932df745", 00:06:16.033 "assigned_rate_limits": { 00:06:16.033 "rw_ios_per_sec": 0, 00:06:16.033 "rw_mbytes_per_sec": 0, 00:06:16.033 "r_mbytes_per_sec": 0, 00:06:16.033 "w_mbytes_per_sec": 0 00:06:16.033 }, 00:06:16.033 "claimed": false, 00:06:16.033 "zoned": false, 00:06:16.033 "supported_io_types": { 00:06:16.033 "read": true, 00:06:16.033 "write": true, 00:06:16.033 "unmap": true, 00:06:16.033 "flush": true, 00:06:16.033 "reset": true, 00:06:16.033 "nvme_admin": false, 00:06:16.033 "nvme_io": false, 00:06:16.033 "nvme_io_md": false, 00:06:16.033 "write_zeroes": true, 00:06:16.033 "zcopy": true, 00:06:16.033 "get_zone_info": false, 00:06:16.033 "zone_management": false, 00:06:16.033 "zone_append": false, 00:06:16.033 "compare": false, 00:06:16.033 "compare_and_write": false, 00:06:16.033 "abort": true, 00:06:16.033 "seek_hole": false, 00:06:16.033 "seek_data": false, 00:06:16.033 "copy": true, 00:06:16.033 "nvme_iov_md": false 00:06:16.033 }, 00:06:16.033 "memory_domains": [ 00:06:16.033 { 00:06:16.033 "dma_device_id": "system", 00:06:16.033 "dma_device_type": 1 00:06:16.033 }, 00:06:16.033 { 00:06:16.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:16.033 "dma_device_type": 2 00:06:16.033 } 00:06:16.033 ], 00:06:16.033 "driver_specific": { 00:06:16.033 "passthru": { 00:06:16.033 "name": "Passthru0", 00:06:16.033 "base_bdev_name": "Malloc2" 00:06:16.033 } 00:06:16.033 } 00:06:16.033 } 00:06:16.033 ]' 00:06:16.033 09:15:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:16.033 09:15:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:16.033 09:15:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:16.033 09:15:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.033 09:15:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.033 09:15:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.033 09:15:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:16.033 09:15:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.033 09:15:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.033 09:15:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.033 09:15:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:16.033 09:15:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.033 09:15:03 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.033 09:15:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.033 09:15:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:16.033 09:15:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:16.293 09:15:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:16.293 00:06:16.293 real 0m0.296s 00:06:16.293 user 0m0.193s 00:06:16.293 sys 0m0.040s 00:06:16.293 09:15:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.293 09:15:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.293 ************************************ 00:06:16.293 END TEST rpc_daemon_integrity 00:06:16.293 ************************************ 00:06:16.293 09:15:03 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:16.293 09:15:03 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:16.293 09:15:03 rpc -- rpc/rpc.sh@84 -- # killprocess 468677 00:06:16.293 09:15:03 rpc -- common/autotest_common.sh@948 -- # '[' -z 468677 ']' 00:06:16.293 09:15:03 rpc -- common/autotest_common.sh@952 -- # kill -0 468677 00:06:16.293 09:15:03 rpc -- common/autotest_common.sh@953 -- # uname 00:06:16.293 09:15:03 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.293 09:15:03 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 468677 00:06:16.293 09:15:03 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:16.293 09:15:03 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:16.293 09:15:03 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 468677' 00:06:16.293 killing process with pid 468677 00:06:16.293 09:15:03 rpc -- common/autotest_common.sh@967 -- # kill 468677 00:06:16.293 09:15:03 rpc -- common/autotest_common.sh@972 -- # wait 468677 00:06:16.553 00:06:16.553 real 0m2.462s 00:06:16.553 user 0m3.277s 00:06:16.553 sys 0m0.653s 00:06:16.553 09:15:03 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.553 09:15:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.553 ************************************ 00:06:16.553 END TEST rpc 00:06:16.553 ************************************ 00:06:16.553 09:15:03 -- common/autotest_common.sh@1142 -- # return 0 00:06:16.553 09:15:03 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:16.553 09:15:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.553 09:15:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.553 09:15:03 -- common/autotest_common.sh@10 -- # set +x 00:06:16.553 ************************************ 00:06:16.553 START TEST skip_rpc 00:06:16.553 ************************************ 00:06:16.553 09:15:03 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:16.553 * Looking for test storage... 
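For reference, the suite that just finished ('TEST rpc', 2.462s wall) and the one starting here are plain shell scripts and can be run on their own from an SPDK checkout; the paths are the ones shown in the run_test lines, and the same root/hugepage environment as this job is assumed:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo ./test/rpc/rpc.sh        # integrity / plugins / trace / daemon_integrity, as above
    sudo ./test/rpc/skip_rpc.sh   # the skip_rpc variants that follow below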
00:06:16.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:16.553 09:15:03 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:16.553 09:15:03 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:16.553 09:15:03 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:16.553 09:15:03 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.553 09:15:03 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.553 09:15:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.553 ************************************ 00:06:16.553 START TEST skip_rpc 00:06:16.553 ************************************ 00:06:16.553 09:15:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:06:16.553 09:15:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=469307 00:06:16.553 09:15:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:16.553 09:15:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:16.553 09:15:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:16.814 [2024-07-15 09:15:03.807267] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:06:16.814 [2024-07-15 09:15:03.807333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469307 ] 00:06:16.814 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.814 [2024-07-15 09:15:03.877791] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.814 [2024-07-15 09:15:03.954260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.104 09:15:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:22.104 09:15:08 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:22.104 09:15:08 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:22.104 09:15:08 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:22.104 09:15:08 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.104 09:15:08 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:22.104 09:15:08 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.104 09:15:08 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:22.104 09:15:08 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.105 09:15:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.105 09:15:08 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:22.105 09:15:08 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:22.105 09:15:08 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.105 09:15:08 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:22.105 09:15:08 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.105 09:15:08 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:22.105 09:15:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 469307 00:06:22.105 09:15:08 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 469307 ']' 00:06:22.105 09:15:08 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 469307 00:06:22.105 09:15:08 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:06:22.105 09:15:08 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:22.105 09:15:08 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 469307 00:06:22.105 09:15:08 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:22.105 09:15:08 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:22.105 09:15:08 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 469307' 00:06:22.105 killing process with pid 469307 00:06:22.105 09:15:08 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 469307 00:06:22.105 09:15:08 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 469307 00:06:22.105 00:06:22.105 real 0m5.277s 00:06:22.105 user 0m5.080s 00:06:22.105 sys 0m0.230s 00:06:22.105 09:15:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.105 09:15:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.105 ************************************ 00:06:22.105 END TEST skip_rpc 00:06:22.105 ************************************ 00:06:22.105 09:15:09 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:22.105 09:15:09 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:22.105 09:15:09 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.105 09:15:09 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.105 09:15:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.105 ************************************ 00:06:22.105 START TEST skip_rpc_with_json 00:06:22.105 ************************************ 00:06:22.105 09:15:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:06:22.105 09:15:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:22.105 09:15:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=470736 00:06:22.105 09:15:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:22.105 09:15:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 470736 00:06:22.105 09:15:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.105 09:15:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 470736 ']' 00:06:22.105 09:15:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.105 09:15:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.105 09:15:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
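The skip_rpc case that just passed reduces to one expectation: with --no-rpc-server the target never opens /var/tmp/spdk.sock, so any RPC against it must fail. A minimal sketch of the same check (paths as in this workspace; the fixed sleep stands in for the test's 5-second wait):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5
    if $SPDK/scripts/rpc.py spdk_get_version; then
        echo 'unexpected: RPC succeeded without an RPC server' >&2; exit 1
    fi
    kill %1   # equivalent of the killprocess step above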
00:06:22.105 09:15:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.105 09:15:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:22.105 [2024-07-15 09:15:09.157123] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:06:22.105 [2024-07-15 09:15:09.157173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid470736 ] 00:06:22.105 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.105 [2024-07-15 09:15:09.225865] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.105 [2024-07-15 09:15:09.295971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.049 09:15:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.049 09:15:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:06:23.049 09:15:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:23.049 09:15:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.049 09:15:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:23.049 [2024-07-15 09:15:09.915625] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:23.049 request: 00:06:23.049 { 00:06:23.049 "trtype": "tcp", 00:06:23.049 "method": "nvmf_get_transports", 00:06:23.049 "req_id": 1 00:06:23.049 } 00:06:23.049 Got JSON-RPC error response 00:06:23.049 response: 00:06:23.049 { 00:06:23.049 "code": -19, 00:06:23.049 "message": "No such device" 00:06:23.049 } 00:06:23.049 09:15:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:23.049 09:15:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:23.049 09:15:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.049 09:15:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:23.049 [2024-07-15 09:15:09.927746] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:23.049 09:15:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.049 09:15:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:23.049 09:15:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.049 09:15:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:23.049 09:15:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.049 09:15:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:23.049 { 00:06:23.049 "subsystems": [ 00:06:23.049 { 00:06:23.049 "subsystem": "vfio_user_target", 00:06:23.049 "config": null 00:06:23.049 }, 00:06:23.049 { 00:06:23.049 "subsystem": "keyring", 00:06:23.049 "config": [] 00:06:23.049 }, 00:06:23.049 { 00:06:23.049 "subsystem": "iobuf", 00:06:23.049 "config": [ 00:06:23.049 { 00:06:23.049 "method": "iobuf_set_options", 00:06:23.049 "params": { 00:06:23.049 "small_pool_count": 8192, 00:06:23.049 "large_pool_count": 1024, 00:06:23.049 "small_bufsize": 8192, 00:06:23.049 "large_bufsize": 
135168 00:06:23.049 } 00:06:23.049 } 00:06:23.049 ] 00:06:23.049 }, 00:06:23.049 { 00:06:23.049 "subsystem": "sock", 00:06:23.049 "config": [ 00:06:23.049 { 00:06:23.049 "method": "sock_set_default_impl", 00:06:23.049 "params": { 00:06:23.049 "impl_name": "posix" 00:06:23.049 } 00:06:23.049 }, 00:06:23.049 { 00:06:23.049 "method": "sock_impl_set_options", 00:06:23.049 "params": { 00:06:23.049 "impl_name": "ssl", 00:06:23.049 "recv_buf_size": 4096, 00:06:23.049 "send_buf_size": 4096, 00:06:23.049 "enable_recv_pipe": true, 00:06:23.049 "enable_quickack": false, 00:06:23.049 "enable_placement_id": 0, 00:06:23.049 "enable_zerocopy_send_server": true, 00:06:23.049 "enable_zerocopy_send_client": false, 00:06:23.049 "zerocopy_threshold": 0, 00:06:23.049 "tls_version": 0, 00:06:23.049 "enable_ktls": false 00:06:23.049 } 00:06:23.049 }, 00:06:23.049 { 00:06:23.049 "method": "sock_impl_set_options", 00:06:23.049 "params": { 00:06:23.049 "impl_name": "posix", 00:06:23.049 "recv_buf_size": 2097152, 00:06:23.049 "send_buf_size": 2097152, 00:06:23.049 "enable_recv_pipe": true, 00:06:23.049 "enable_quickack": false, 00:06:23.049 "enable_placement_id": 0, 00:06:23.049 "enable_zerocopy_send_server": true, 00:06:23.049 "enable_zerocopy_send_client": false, 00:06:23.049 "zerocopy_threshold": 0, 00:06:23.049 "tls_version": 0, 00:06:23.049 "enable_ktls": false 00:06:23.049 } 00:06:23.049 } 00:06:23.049 ] 00:06:23.049 }, 00:06:23.049 { 00:06:23.049 "subsystem": "vmd", 00:06:23.049 "config": [] 00:06:23.049 }, 00:06:23.049 { 00:06:23.049 "subsystem": "accel", 00:06:23.049 "config": [ 00:06:23.049 { 00:06:23.049 "method": "accel_set_options", 00:06:23.049 "params": { 00:06:23.049 "small_cache_size": 128, 00:06:23.049 "large_cache_size": 16, 00:06:23.049 "task_count": 2048, 00:06:23.049 "sequence_count": 2048, 00:06:23.049 "buf_count": 2048 00:06:23.049 } 00:06:23.049 } 00:06:23.049 ] 00:06:23.049 }, 00:06:23.049 { 00:06:23.049 "subsystem": "bdev", 00:06:23.049 "config": [ 00:06:23.049 { 00:06:23.049 "method": "bdev_set_options", 00:06:23.049 "params": { 00:06:23.049 "bdev_io_pool_size": 65535, 00:06:23.049 "bdev_io_cache_size": 256, 00:06:23.049 "bdev_auto_examine": true, 00:06:23.049 "iobuf_small_cache_size": 128, 00:06:23.049 "iobuf_large_cache_size": 16 00:06:23.049 } 00:06:23.049 }, 00:06:23.049 { 00:06:23.049 "method": "bdev_raid_set_options", 00:06:23.049 "params": { 00:06:23.049 "process_window_size_kb": 1024 00:06:23.049 } 00:06:23.049 }, 00:06:23.049 { 00:06:23.049 "method": "bdev_iscsi_set_options", 00:06:23.049 "params": { 00:06:23.049 "timeout_sec": 30 00:06:23.049 } 00:06:23.049 }, 00:06:23.049 { 00:06:23.049 "method": "bdev_nvme_set_options", 00:06:23.049 "params": { 00:06:23.049 "action_on_timeout": "none", 00:06:23.049 "timeout_us": 0, 00:06:23.049 "timeout_admin_us": 0, 00:06:23.049 "keep_alive_timeout_ms": 10000, 00:06:23.049 "arbitration_burst": 0, 00:06:23.049 "low_priority_weight": 0, 00:06:23.049 "medium_priority_weight": 0, 00:06:23.049 "high_priority_weight": 0, 00:06:23.049 "nvme_adminq_poll_period_us": 10000, 00:06:23.049 "nvme_ioq_poll_period_us": 0, 00:06:23.049 "io_queue_requests": 0, 00:06:23.049 "delay_cmd_submit": true, 00:06:23.049 "transport_retry_count": 4, 00:06:23.049 "bdev_retry_count": 3, 00:06:23.049 "transport_ack_timeout": 0, 00:06:23.049 "ctrlr_loss_timeout_sec": 0, 00:06:23.049 "reconnect_delay_sec": 0, 00:06:23.049 "fast_io_fail_timeout_sec": 0, 00:06:23.049 "disable_auto_failback": false, 00:06:23.049 "generate_uuids": false, 00:06:23.049 "transport_tos": 0, 
00:06:23.049 "nvme_error_stat": false, 00:06:23.049 "rdma_srq_size": 0, 00:06:23.049 "io_path_stat": false, 00:06:23.049 "allow_accel_sequence": false, 00:06:23.049 "rdma_max_cq_size": 0, 00:06:23.050 "rdma_cm_event_timeout_ms": 0, 00:06:23.050 "dhchap_digests": [ 00:06:23.050 "sha256", 00:06:23.050 "sha384", 00:06:23.050 "sha512" 00:06:23.050 ], 00:06:23.050 "dhchap_dhgroups": [ 00:06:23.050 "null", 00:06:23.050 "ffdhe2048", 00:06:23.050 "ffdhe3072", 00:06:23.050 "ffdhe4096", 00:06:23.050 "ffdhe6144", 00:06:23.050 "ffdhe8192" 00:06:23.050 ] 00:06:23.050 } 00:06:23.050 }, 00:06:23.050 { 00:06:23.050 "method": "bdev_nvme_set_hotplug", 00:06:23.050 "params": { 00:06:23.050 "period_us": 100000, 00:06:23.050 "enable": false 00:06:23.050 } 00:06:23.050 }, 00:06:23.050 { 00:06:23.050 "method": "bdev_wait_for_examine" 00:06:23.050 } 00:06:23.050 ] 00:06:23.050 }, 00:06:23.050 { 00:06:23.050 "subsystem": "scsi", 00:06:23.050 "config": null 00:06:23.050 }, 00:06:23.050 { 00:06:23.050 "subsystem": "scheduler", 00:06:23.050 "config": [ 00:06:23.050 { 00:06:23.050 "method": "framework_set_scheduler", 00:06:23.050 "params": { 00:06:23.050 "name": "static" 00:06:23.050 } 00:06:23.050 } 00:06:23.050 ] 00:06:23.050 }, 00:06:23.050 { 00:06:23.050 "subsystem": "vhost_scsi", 00:06:23.050 "config": [] 00:06:23.050 }, 00:06:23.050 { 00:06:23.050 "subsystem": "vhost_blk", 00:06:23.050 "config": [] 00:06:23.050 }, 00:06:23.050 { 00:06:23.050 "subsystem": "ublk", 00:06:23.050 "config": [] 00:06:23.050 }, 00:06:23.050 { 00:06:23.050 "subsystem": "nbd", 00:06:23.050 "config": [] 00:06:23.050 }, 00:06:23.050 { 00:06:23.050 "subsystem": "nvmf", 00:06:23.050 "config": [ 00:06:23.050 { 00:06:23.050 "method": "nvmf_set_config", 00:06:23.050 "params": { 00:06:23.050 "discovery_filter": "match_any", 00:06:23.050 "admin_cmd_passthru": { 00:06:23.050 "identify_ctrlr": false 00:06:23.050 } 00:06:23.050 } 00:06:23.050 }, 00:06:23.050 { 00:06:23.050 "method": "nvmf_set_max_subsystems", 00:06:23.050 "params": { 00:06:23.050 "max_subsystems": 1024 00:06:23.050 } 00:06:23.050 }, 00:06:23.050 { 00:06:23.050 "method": "nvmf_set_crdt", 00:06:23.050 "params": { 00:06:23.050 "crdt1": 0, 00:06:23.050 "crdt2": 0, 00:06:23.050 "crdt3": 0 00:06:23.050 } 00:06:23.050 }, 00:06:23.050 { 00:06:23.050 "method": "nvmf_create_transport", 00:06:23.050 "params": { 00:06:23.050 "trtype": "TCP", 00:06:23.050 "max_queue_depth": 128, 00:06:23.050 "max_io_qpairs_per_ctrlr": 127, 00:06:23.050 "in_capsule_data_size": 4096, 00:06:23.050 "max_io_size": 131072, 00:06:23.050 "io_unit_size": 131072, 00:06:23.050 "max_aq_depth": 128, 00:06:23.050 "num_shared_buffers": 511, 00:06:23.050 "buf_cache_size": 4294967295, 00:06:23.050 "dif_insert_or_strip": false, 00:06:23.050 "zcopy": false, 00:06:23.050 "c2h_success": true, 00:06:23.050 "sock_priority": 0, 00:06:23.050 "abort_timeout_sec": 1, 00:06:23.050 "ack_timeout": 0, 00:06:23.050 "data_wr_pool_size": 0 00:06:23.050 } 00:06:23.050 } 00:06:23.050 ] 00:06:23.050 }, 00:06:23.050 { 00:06:23.050 "subsystem": "iscsi", 00:06:23.050 "config": [ 00:06:23.050 { 00:06:23.050 "method": "iscsi_set_options", 00:06:23.050 "params": { 00:06:23.050 "node_base": "iqn.2016-06.io.spdk", 00:06:23.050 "max_sessions": 128, 00:06:23.050 "max_connections_per_session": 2, 00:06:23.050 "max_queue_depth": 64, 00:06:23.050 "default_time2wait": 2, 00:06:23.050 "default_time2retain": 20, 00:06:23.050 "first_burst_length": 8192, 00:06:23.050 "immediate_data": true, 00:06:23.050 "allow_duplicated_isid": false, 00:06:23.050 
"error_recovery_level": 0, 00:06:23.050 "nop_timeout": 60, 00:06:23.050 "nop_in_interval": 30, 00:06:23.050 "disable_chap": false, 00:06:23.050 "require_chap": false, 00:06:23.050 "mutual_chap": false, 00:06:23.050 "chap_group": 0, 00:06:23.050 "max_large_datain_per_connection": 64, 00:06:23.050 "max_r2t_per_connection": 4, 00:06:23.050 "pdu_pool_size": 36864, 00:06:23.050 "immediate_data_pool_size": 16384, 00:06:23.050 "data_out_pool_size": 2048 00:06:23.050 } 00:06:23.050 } 00:06:23.050 ] 00:06:23.050 } 00:06:23.050 ] 00:06:23.050 } 00:06:23.050 09:15:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:23.050 09:15:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 470736 00:06:23.050 09:15:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 470736 ']' 00:06:23.050 09:15:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 470736 00:06:23.050 09:15:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:23.050 09:15:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:23.050 09:15:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 470736 00:06:23.050 09:15:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:23.050 09:15:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:23.050 09:15:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 470736' 00:06:23.050 killing process with pid 470736 00:06:23.050 09:15:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 470736 00:06:23.050 09:15:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 470736 00:06:23.311 09:15:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=471038 00:06:23.311 09:15:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:23.311 09:15:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 471038 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 471038 ']' 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 471038 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 471038 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 471038' 00:06:28.626 killing process with pid 471038 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 471038 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 471038 00:06:28.626 09:15:15 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:28.626 00:06:28.626 real 0m6.532s 00:06:28.626 user 0m6.402s 00:06:28.626 sys 0m0.534s 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:28.626 ************************************ 00:06:28.626 END TEST skip_rpc_with_json 00:06:28.626 ************************************ 00:06:28.626 09:15:15 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:28.626 09:15:15 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:28.626 09:15:15 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.626 09:15:15 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.626 09:15:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.626 ************************************ 00:06:28.626 START TEST skip_rpc_with_delay 00:06:28.626 ************************************ 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:28.626 09:15:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:28.627 09:15:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:28.627 [2024-07-15 09:15:15.766376] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
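The skip_rpc_with_json flow that ends here is the usual save/replay pattern: configure the live target over RPC, dump its state with save_config, then restart with --json so the same state is rebuilt without issuing any RPCs. A condensed sketch built only from the RPCs and flags seen in this log (the sleeps stand in for the harness's waitforlisten/sleep steps):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    CONFIG=$SPDK/test/rpc/config.json
    $SPDK/build/bin/spdk_tgt -m 0x1 & sleep 5
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp        # the RPC the saved config must replay
    $SPDK/scripts/rpc.py save_config > $CONFIG
    kill %1; wait
    $SPDK/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json $CONFIG > log.txt 2>&1 & sleep 5
    grep -q 'TCP Transport Init' log.txt                     # the pass criterion used above
    kill %1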
00:06:28.627 [2024-07-15 09:15:15.766468] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:28.627 09:15:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:28.627 09:15:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:28.627 09:15:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:28.627 09:15:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:28.627 00:06:28.627 real 0m0.073s 00:06:28.627 user 0m0.051s 00:06:28.627 sys 0m0.022s 00:06:28.627 09:15:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.627 09:15:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:28.627 ************************************ 00:06:28.627 END TEST skip_rpc_with_delay 00:06:28.627 ************************************ 00:06:28.627 09:15:15 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:28.627 09:15:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:28.627 09:15:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:28.627 09:15:15 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:28.627 09:15:15 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.627 09:15:15 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.627 09:15:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.886 ************************************ 00:06:28.887 START TEST exit_on_failed_rpc_init 00:06:28.887 ************************************ 00:06:28.887 09:15:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:28.887 09:15:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=472412 00:06:28.887 09:15:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 472412 00:06:28.887 09:15:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 472412 ']' 00:06:28.887 09:15:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.887 09:15:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.887 09:15:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.887 09:15:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.887 09:15:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.887 09:15:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:28.887 [2024-07-15 09:15:15.934236] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:06:28.887 [2024-07-15 09:15:15.934299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid472412 ] 00:06:28.887 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.887 [2024-07-15 09:15:16.005389] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.887 [2024-07-15 09:15:16.080150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.826 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.826 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:29.826 09:15:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:29.826 09:15:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:29.826 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:29.826 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:29.826 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.826 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.826 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.826 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.826 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.826 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.826 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.826 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:29.826 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:29.826 [2024-07-15 09:15:16.725742] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:06:29.826 [2024-07-15 09:15:16.725794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid472463 ] 00:06:29.826 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.826 [2024-07-15 09:15:16.807231] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.826 [2024-07-15 09:15:16.870962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.826 [2024-07-15 09:15:16.871023] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:29.826 [2024-07-15 09:15:16.871033] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:29.826 [2024-07-15 09:15:16.871039] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:29.826 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:29.826 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:29.826 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:29.826 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:29.826 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:29.827 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:29.827 09:15:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:29.827 09:15:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 472412 00:06:29.827 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 472412 ']' 00:06:29.827 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 472412 00:06:29.827 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:29.827 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:29.827 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 472412 00:06:29.827 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:29.827 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:29.827 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 472412' 00:06:29.827 killing process with pid 472412 00:06:29.827 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 472412 00:06:29.827 09:15:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 472412 00:06:30.087 00:06:30.087 real 0m1.329s 00:06:30.087 user 0m1.538s 00:06:30.087 sys 0m0.381s 00:06:30.087 09:15:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.087 09:15:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:30.087 ************************************ 00:06:30.087 END TEST exit_on_failed_rpc_init 00:06:30.087 ************************************ 00:06:30.087 09:15:17 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:30.087 09:15:17 skip_rpc -- rpc/skip_rpc.sh@81 -- 
# rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:30.087 00:06:30.087 real 0m13.619s 00:06:30.087 user 0m13.216s 00:06:30.087 sys 0m1.453s 00:06:30.087 09:15:17 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.087 09:15:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.087 ************************************ 00:06:30.087 END TEST skip_rpc 00:06:30.087 ************************************ 00:06:30.087 09:15:17 -- common/autotest_common.sh@1142 -- # return 0 00:06:30.087 09:15:17 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:30.087 09:15:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.087 09:15:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.087 09:15:17 -- common/autotest_common.sh@10 -- # set +x 00:06:30.348 ************************************ 00:06:30.348 START TEST rpc_client 00:06:30.348 ************************************ 00:06:30.349 09:15:17 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:30.349 * Looking for test storage... 00:06:30.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:30.349 09:15:17 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:30.349 OK 00:06:30.349 09:15:17 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:30.349 00:06:30.349 real 0m0.129s 00:06:30.349 user 0m0.056s 00:06:30.349 sys 0m0.081s 00:06:30.349 09:15:17 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.349 09:15:17 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:30.349 ************************************ 00:06:30.349 END TEST rpc_client 00:06:30.349 ************************************ 00:06:30.349 09:15:17 -- common/autotest_common.sh@1142 -- # return 0 00:06:30.349 09:15:17 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:30.349 09:15:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.349 09:15:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.349 09:15:17 -- common/autotest_common.sh@10 -- # set +x 00:06:30.349 ************************************ 00:06:30.349 START TEST json_config 00:06:30.349 ************************************ 00:06:30.349 09:15:17 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:30.609 09:15:17 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:30.609 09:15:17 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:30.609 09:15:17 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.609 09:15:17 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.609 09:15:17 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.609 09:15:17 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.609 09:15:17 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.609 09:15:17 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.609 09:15:17 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.609 09:15:17 json_config -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.609 09:15:17 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.609 09:15:17 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.609 09:15:17 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:30.609 09:15:17 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:30.609 09:15:17 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.609 09:15:17 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.609 09:15:17 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:30.609 09:15:17 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.609 09:15:17 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:30.609 09:15:17 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.609 09:15:17 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.609 09:15:17 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.610 09:15:17 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.610 09:15:17 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.610 09:15:17 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.610 09:15:17 json_config -- paths/export.sh@5 -- # export PATH 00:06:30.610 09:15:17 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.610 09:15:17 json_config -- nvmf/common.sh@47 -- # : 0 00:06:30.610 09:15:17 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:30.610 09:15:17 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:30.610 09:15:17 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.610 09:15:17 json_config -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.610 09:15:17 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.610 09:15:17 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:30.610 09:15:17 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:30.610 09:15:17 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:30.610 09:15:17 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:30.610 09:15:17 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:30.610 09:15:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:30.610 09:15:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:30.610 09:15:17 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:30.610 09:15:17 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:30.610 09:15:17 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:30.610 09:15:17 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:30.610 09:15:17 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:30.610 09:15:17 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:30.610 09:15:17 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:30.610 09:15:17 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:30.610 09:15:17 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:30.610 09:15:17 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:30.610 09:15:17 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:30.610 09:15:17 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:30.610 INFO: JSON configuration test init 00:06:30.610 09:15:17 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:30.610 09:15:17 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:30.610 09:15:17 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:30.610 09:15:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.610 09:15:17 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:30.610 09:15:17 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:30.610 09:15:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.610 09:15:17 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:30.610 09:15:17 json_config -- json_config/common.sh@9 -- # local app=target 00:06:30.610 09:15:17 json_config -- json_config/common.sh@10 -- # shift 00:06:30.610 09:15:17 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:30.610 09:15:17 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:30.610 09:15:17 json_config -- 
json_config/common.sh@15 -- # local app_extra_params= 00:06:30.610 09:15:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:30.610 09:15:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:30.610 09:15:17 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=472886 00:06:30.610 09:15:17 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:30.610 09:15:17 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:30.610 Waiting for target to run... 00:06:30.610 09:15:17 json_config -- json_config/common.sh@25 -- # waitforlisten 472886 /var/tmp/spdk_tgt.sock 00:06:30.610 09:15:17 json_config -- common/autotest_common.sh@829 -- # '[' -z 472886 ']' 00:06:30.610 09:15:17 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:30.610 09:15:17 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.610 09:15:17 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:30.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:30.610 09:15:17 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.610 09:15:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.610 [2024-07-15 09:15:17.668201] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:06:30.610 [2024-07-15 09:15:17.668257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid472886 ] 00:06:30.610 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.870 [2024-07-15 09:15:17.885180] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.870 [2024-07-15 09:15:17.935245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.441 09:15:18 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.441 09:15:18 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:31.441 09:15:18 json_config -- json_config/common.sh@26 -- # echo '' 00:06:31.441 00:06:31.441 09:15:18 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:31.441 09:15:18 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:31.441 09:15:18 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:31.441 09:15:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.441 09:15:18 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:31.441 09:15:18 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:31.441 09:15:18 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:31.441 09:15:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.441 09:15:18 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:31.441 09:15:18 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:31.441 09:15:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock load_config 00:06:32.011 09:15:19 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:32.011 09:15:19 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:32.011 09:15:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:32.011 09:15:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.011 09:15:19 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:32.011 09:15:19 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:32.011 09:15:19 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:32.011 09:15:19 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:32.011 09:15:19 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:32.011 09:15:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:32.011 09:15:19 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:32.011 09:15:19 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:32.011 09:15:19 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:32.011 09:15:19 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:32.011 09:15:19 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:32.011 09:15:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.271 09:15:19 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:32.271 09:15:19 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:32.271 09:15:19 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:32.271 09:15:19 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:32.271 09:15:19 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:32.271 09:15:19 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:32.271 09:15:19 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:32.271 09:15:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:32.271 09:15:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.271 09:15:19 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:32.271 09:15:19 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:32.271 09:15:19 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:32.271 09:15:19 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:32.271 09:15:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:32.271 MallocForNvmf0 00:06:32.271 09:15:19 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:32.271 09:15:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:32.531 MallocForNvmf1 
00:06:32.531 09:15:19 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:32.531 09:15:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:32.531 [2024-07-15 09:15:19.703420] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:32.792 09:15:19 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:32.792 09:15:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:32.792 09:15:19 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:32.792 09:15:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:33.053 09:15:20 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:33.053 09:15:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:33.053 09:15:20 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:33.053 09:15:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:33.314 [2024-07-15 09:15:20.357636] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:33.314 09:15:20 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:33.314 09:15:20 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:33.314 09:15:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.314 09:15:20 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:33.314 09:15:20 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:33.314 09:15:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.314 09:15:20 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:33.314 09:15:20 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:33.314 09:15:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:33.575 MallocBdevForConfigChangeCheck 00:06:33.575 09:15:20 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:33.575 09:15:20 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:33.575 09:15:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.575 09:15:20 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:33.575 09:15:20 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:33.835 09:15:20 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:33.835 INFO: shutting down applications... 00:06:33.835 09:15:20 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:33.835 09:15:20 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:33.835 09:15:20 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:33.835 09:15:20 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:34.406 Calling clear_iscsi_subsystem 00:06:34.406 Calling clear_nvmf_subsystem 00:06:34.406 Calling clear_nbd_subsystem 00:06:34.406 Calling clear_ublk_subsystem 00:06:34.406 Calling clear_vhost_blk_subsystem 00:06:34.406 Calling clear_vhost_scsi_subsystem 00:06:34.406 Calling clear_bdev_subsystem 00:06:34.406 09:15:21 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:34.406 09:15:21 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:34.406 09:15:21 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:34.406 09:15:21 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:34.406 09:15:21 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:34.406 09:15:21 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:34.667 09:15:21 json_config -- json_config/json_config.sh@345 -- # break 00:06:34.667 09:15:21 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:34.667 09:15:21 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:34.667 09:15:21 json_config -- json_config/common.sh@31 -- # local app=target 00:06:34.667 09:15:21 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:34.667 09:15:21 json_config -- json_config/common.sh@35 -- # [[ -n 472886 ]] 00:06:34.667 09:15:21 json_config -- json_config/common.sh@38 -- # kill -SIGINT 472886 00:06:34.667 09:15:21 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:34.667 09:15:21 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:34.667 09:15:21 json_config -- json_config/common.sh@41 -- # kill -0 472886 00:06:34.667 09:15:21 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:35.236 09:15:22 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:35.236 09:15:22 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:35.236 09:15:22 json_config -- json_config/common.sh@41 -- # kill -0 472886 00:06:35.236 09:15:22 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:35.236 09:15:22 json_config -- json_config/common.sh@43 -- # break 00:06:35.236 09:15:22 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:35.236 09:15:22 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:35.236 SPDK target shutdown done 00:06:35.236 09:15:22 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:35.236 INFO: relaunching applications... 00:06:35.236 09:15:22 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:35.236 09:15:22 json_config -- json_config/common.sh@9 -- # local app=target 00:06:35.236 09:15:22 json_config -- json_config/common.sh@10 -- # shift 00:06:35.236 09:15:22 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:35.236 09:15:22 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:35.236 09:15:22 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:35.236 09:15:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:35.236 09:15:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:35.236 09:15:22 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=473786 00:06:35.236 09:15:22 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:35.236 Waiting for target to run... 00:06:35.236 09:15:22 json_config -- json_config/common.sh@25 -- # waitforlisten 473786 /var/tmp/spdk_tgt.sock 00:06:35.236 09:15:22 json_config -- common/autotest_common.sh@829 -- # '[' -z 473786 ']' 00:06:35.236 09:15:22 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:35.236 09:15:22 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:35.236 09:15:22 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.236 09:15:22 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:35.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:35.236 09:15:22 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.236 09:15:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.236 [2024-07-15 09:15:22.241302] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:06:35.236 [2024-07-15 09:15:22.241358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473786 ] 00:06:35.236 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.496 [2024-07-15 09:15:22.482468] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.496 [2024-07-15 09:15:22.536938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.065 [2024-07-15 09:15:23.041039] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:36.065 [2024-07-15 09:15:23.073394] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:36.065 09:15:23 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.065 09:15:23 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:36.065 09:15:23 json_config -- json_config/common.sh@26 -- # echo '' 00:06:36.065 00:06:36.065 09:15:23 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:36.065 09:15:23 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:36.065 INFO: Checking if target configuration is the same... 00:06:36.065 09:15:23 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:36.065 09:15:23 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:36.065 09:15:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:36.066 + '[' 2 -ne 2 ']' 00:06:36.066 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:36.066 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:36.066 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:36.066 +++ basename /dev/fd/62 00:06:36.066 ++ mktemp /tmp/62.XXX 00:06:36.066 + tmp_file_1=/tmp/62.uAN 00:06:36.066 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:36.066 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:36.066 + tmp_file_2=/tmp/spdk_tgt_config.json.Qrm 00:06:36.066 + ret=0 00:06:36.066 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:36.325 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:36.325 + diff -u /tmp/62.uAN /tmp/spdk_tgt_config.json.Qrm 00:06:36.325 + echo 'INFO: JSON config files are the same' 00:06:36.325 INFO: JSON config files are the same 00:06:36.325 + rm /tmp/62.uAN /tmp/spdk_tgt_config.json.Qrm 00:06:36.325 + exit 0 00:06:36.325 09:15:23 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:36.325 09:15:23 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:36.325 INFO: changing configuration and checking if this can be detected... 
00:06:36.325 09:15:23 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:36.325 09:15:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:36.586 09:15:23 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:36.586 09:15:23 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:36.586 09:15:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:36.586 + '[' 2 -ne 2 ']' 00:06:36.586 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:36.586 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:36.586 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:36.586 +++ basename /dev/fd/62 00:06:36.586 ++ mktemp /tmp/62.XXX 00:06:36.586 + tmp_file_1=/tmp/62.cfr 00:06:36.586 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:36.586 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:36.586 + tmp_file_2=/tmp/spdk_tgt_config.json.Fet 00:06:36.586 + ret=0 00:06:36.586 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:36.846 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:36.846 + diff -u /tmp/62.cfr /tmp/spdk_tgt_config.json.Fet 00:06:36.846 + ret=1 00:06:36.846 + echo '=== Start of file: /tmp/62.cfr ===' 00:06:36.846 + cat /tmp/62.cfr 00:06:36.846 + echo '=== End of file: /tmp/62.cfr ===' 00:06:36.846 + echo '' 00:06:36.846 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Fet ===' 00:06:36.846 + cat /tmp/spdk_tgt_config.json.Fet 00:06:36.846 + echo '=== End of file: /tmp/spdk_tgt_config.json.Fet ===' 00:06:36.846 + echo '' 00:06:36.846 + rm /tmp/62.cfr /tmp/spdk_tgt_config.json.Fet 00:06:36.846 + exit 1 00:06:36.846 09:15:23 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:36.846 INFO: configuration change detected. 
00:06:36.846 09:15:23 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:36.846 09:15:23 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:36.846 09:15:23 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:36.846 09:15:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.846 09:15:23 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:36.846 09:15:23 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:36.846 09:15:23 json_config -- json_config/json_config.sh@317 -- # [[ -n 473786 ]] 00:06:36.846 09:15:23 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:36.846 09:15:23 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:36.846 09:15:23 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:36.846 09:15:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.846 09:15:24 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:36.846 09:15:24 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:36.846 09:15:24 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:36.846 09:15:24 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:36.846 09:15:24 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:36.846 09:15:24 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:36.846 09:15:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:36.846 09:15:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.106 09:15:24 json_config -- json_config/json_config.sh@323 -- # killprocess 473786 00:06:37.106 09:15:24 json_config -- common/autotest_common.sh@948 -- # '[' -z 473786 ']' 00:06:37.106 09:15:24 json_config -- common/autotest_common.sh@952 -- # kill -0 473786 00:06:37.106 09:15:24 json_config -- common/autotest_common.sh@953 -- # uname 00:06:37.106 09:15:24 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.106 09:15:24 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 473786 00:06:37.106 09:15:24 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:37.106 09:15:24 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:37.106 09:15:24 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 473786' 00:06:37.106 killing process with pid 473786 00:06:37.106 09:15:24 json_config -- common/autotest_common.sh@967 -- # kill 473786 00:06:37.106 09:15:24 json_config -- common/autotest_common.sh@972 -- # wait 473786 00:06:37.367 09:15:24 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:37.367 09:15:24 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:37.367 09:15:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:37.367 09:15:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.367 09:15:24 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:37.367 09:15:24 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:37.367 INFO: Success 00:06:37.367 00:06:37.367 real 0m6.919s 00:06:37.367 user 
0m8.489s 00:06:37.367 sys 0m1.589s 00:06:37.367 09:15:24 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.367 09:15:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.367 ************************************ 00:06:37.367 END TEST json_config 00:06:37.367 ************************************ 00:06:37.367 09:15:24 -- common/autotest_common.sh@1142 -- # return 0 00:06:37.367 09:15:24 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:37.367 09:15:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:37.367 09:15:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.367 09:15:24 -- common/autotest_common.sh@10 -- # set +x 00:06:37.367 ************************************ 00:06:37.367 START TEST json_config_extra_key 00:06:37.367 ************************************ 00:06:37.367 09:15:24 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:37.628 09:15:24 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:37.628 09:15:24 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:37.628 09:15:24 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.628 09:15:24 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.628 09:15:24 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.628 09:15:24 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.628 09:15:24 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.628 09:15:24 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.628 09:15:24 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.628 09:15:24 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.628 09:15:24 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.628 09:15:24 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.628 09:15:24 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:37.628 09:15:24 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:37.628 09:15:24 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.628 09:15:24 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.628 09:15:24 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:37.628 09:15:24 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.628 09:15:24 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.629 09:15:24 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.629 09:15:24 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.629 09:15:24 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.629 09:15:24 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.629 09:15:24 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.629 09:15:24 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.629 09:15:24 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:37.629 09:15:24 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.629 09:15:24 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:37.629 09:15:24 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:37.629 09:15:24 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:37.629 09:15:24 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.629 09:15:24 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.629 09:15:24 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.629 09:15:24 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:37.629 09:15:24 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:37.629 09:15:24 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:37.629 09:15:24 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:37.629 09:15:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:37.629 09:15:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:37.629 09:15:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:37.629 09:15:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:37.629 09:15:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:37.629 09:15:24 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:37.629 09:15:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:37.629 09:15:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:37.629 09:15:24 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:37.629 09:15:24 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:37.629 INFO: launching applications... 00:06:37.629 09:15:24 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:37.629 09:15:24 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:37.629 09:15:24 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:37.629 09:15:24 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:37.629 09:15:24 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:37.629 09:15:24 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:37.629 09:15:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:37.629 09:15:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:37.629 09:15:24 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=474486 00:06:37.629 09:15:24 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:37.629 Waiting for target to run... 00:06:37.629 09:15:24 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 474486 /var/tmp/spdk_tgt.sock 00:06:37.629 09:15:24 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 474486 ']' 00:06:37.629 09:15:24 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:37.629 09:15:24 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:37.629 09:15:24 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.629 09:15:24 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:37.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:37.629 09:15:24 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.629 09:15:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:37.629 [2024-07-15 09:15:24.667964] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:06:37.629 [2024-07-15 09:15:24.668019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid474486 ] 00:06:37.629 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.890 [2024-07-15 09:15:24.960527] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.890 [2024-07-15 09:15:25.012838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.535 09:15:25 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.535 09:15:25 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:38.535 09:15:25 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:38.535 00:06:38.535 09:15:25 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:38.535 INFO: shutting down applications... 00:06:38.535 09:15:25 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:38.535 09:15:25 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:38.535 09:15:25 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:38.535 09:15:25 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 474486 ]] 00:06:38.535 09:15:25 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 474486 00:06:38.535 09:15:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:38.535 09:15:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:38.535 09:15:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 474486 00:06:38.535 09:15:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:38.796 09:15:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:38.796 09:15:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:38.796 09:15:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 474486 00:06:38.796 09:15:25 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:38.796 09:15:25 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:38.796 09:15:25 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:38.796 09:15:25 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:38.796 SPDK target shutdown done 00:06:38.796 09:15:25 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:38.796 Success 00:06:38.796 00:06:38.796 real 0m1.435s 00:06:38.796 user 0m1.070s 00:06:38.796 sys 0m0.385s 00:06:38.796 09:15:25 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.796 09:15:25 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:38.796 ************************************ 00:06:38.796 END TEST json_config_extra_key 00:06:38.796 ************************************ 00:06:38.796 09:15:25 -- common/autotest_common.sh@1142 -- # return 0 00:06:38.796 09:15:25 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:38.796 09:15:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.796 09:15:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.796 09:15:25 -- 
common/autotest_common.sh@10 -- # set +x 00:06:39.058 ************************************ 00:06:39.058 START TEST alias_rpc 00:06:39.058 ************************************ 00:06:39.058 09:15:26 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:39.058 * Looking for test storage... 00:06:39.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:39.058 09:15:26 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:39.058 09:15:26 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=474875 00:06:39.058 09:15:26 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 474875 00:06:39.058 09:15:26 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:39.058 09:15:26 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 474875 ']' 00:06:39.058 09:15:26 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.058 09:15:26 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.058 09:15:26 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.058 09:15:26 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.058 09:15:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.058 [2024-07-15 09:15:26.176132] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:06:39.058 [2024-07-15 09:15:26.176183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid474875 ] 00:06:39.058 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.058 [2024-07-15 09:15:26.243704] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.319 [2024-07-15 09:15:26.312341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.890 09:15:26 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.890 09:15:26 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:39.890 09:15:26 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:40.155 09:15:27 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 474875 00:06:40.155 09:15:27 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 474875 ']' 00:06:40.155 09:15:27 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 474875 00:06:40.155 09:15:27 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:40.155 09:15:27 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:40.155 09:15:27 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 474875 00:06:40.155 09:15:27 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:40.155 09:15:27 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:40.155 09:15:27 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 474875' 00:06:40.155 killing process with pid 474875 00:06:40.155 09:15:27 alias_rpc -- common/autotest_common.sh@967 
-- # kill 474875 00:06:40.155 09:15:27 alias_rpc -- common/autotest_common.sh@972 -- # wait 474875 00:06:40.415 00:06:40.415 real 0m1.376s 00:06:40.415 user 0m1.521s 00:06:40.415 sys 0m0.363s 00:06:40.415 09:15:27 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.415 09:15:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.415 ************************************ 00:06:40.415 END TEST alias_rpc 00:06:40.415 ************************************ 00:06:40.415 09:15:27 -- common/autotest_common.sh@1142 -- # return 0 00:06:40.415 09:15:27 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:40.415 09:15:27 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:40.415 09:15:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.415 09:15:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.415 09:15:27 -- common/autotest_common.sh@10 -- # set +x 00:06:40.415 ************************************ 00:06:40.415 START TEST spdkcli_tcp 00:06:40.415 ************************************ 00:06:40.416 09:15:27 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:40.416 * Looking for test storage... 00:06:40.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:40.416 09:15:27 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:40.416 09:15:27 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:40.416 09:15:27 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:40.416 09:15:27 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:40.416 09:15:27 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:40.416 09:15:27 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:40.416 09:15:27 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:40.416 09:15:27 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:40.416 09:15:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.416 09:15:27 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=475225 00:06:40.416 09:15:27 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 475225 00:06:40.416 09:15:27 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:40.416 09:15:27 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 475225 ']' 00:06:40.416 09:15:27 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.416 09:15:27 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.416 09:15:27 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.416 09:15:27 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.416 09:15:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.676 [2024-07-15 09:15:27.634548] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:06:40.676 [2024-07-15 09:15:27.634603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid475225 ] 00:06:40.676 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.676 [2024-07-15 09:15:27.701322] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.676 [2024-07-15 09:15:27.768925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.676 [2024-07-15 09:15:27.768928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.248 09:15:28 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.248 09:15:28 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:41.248 09:15:28 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=475280 00:06:41.248 09:15:28 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:41.248 09:15:28 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:41.509 [ 00:06:41.509 "bdev_malloc_delete", 00:06:41.509 "bdev_malloc_create", 00:06:41.509 "bdev_null_resize", 00:06:41.509 "bdev_null_delete", 00:06:41.509 "bdev_null_create", 00:06:41.509 "bdev_nvme_cuse_unregister", 00:06:41.509 "bdev_nvme_cuse_register", 00:06:41.509 "bdev_opal_new_user", 00:06:41.509 "bdev_opal_set_lock_state", 00:06:41.509 "bdev_opal_delete", 00:06:41.509 "bdev_opal_get_info", 00:06:41.509 "bdev_opal_create", 00:06:41.509 "bdev_nvme_opal_revert", 00:06:41.509 "bdev_nvme_opal_init", 00:06:41.509 "bdev_nvme_send_cmd", 00:06:41.509 "bdev_nvme_get_path_iostat", 00:06:41.509 "bdev_nvme_get_mdns_discovery_info", 00:06:41.509 "bdev_nvme_stop_mdns_discovery", 00:06:41.509 "bdev_nvme_start_mdns_discovery", 00:06:41.509 "bdev_nvme_set_multipath_policy", 00:06:41.509 "bdev_nvme_set_preferred_path", 00:06:41.509 "bdev_nvme_get_io_paths", 00:06:41.509 "bdev_nvme_remove_error_injection", 00:06:41.509 "bdev_nvme_add_error_injection", 00:06:41.509 "bdev_nvme_get_discovery_info", 00:06:41.509 "bdev_nvme_stop_discovery", 00:06:41.509 "bdev_nvme_start_discovery", 00:06:41.509 "bdev_nvme_get_controller_health_info", 00:06:41.509 "bdev_nvme_disable_controller", 00:06:41.509 "bdev_nvme_enable_controller", 00:06:41.509 "bdev_nvme_reset_controller", 00:06:41.509 "bdev_nvme_get_transport_statistics", 00:06:41.509 "bdev_nvme_apply_firmware", 00:06:41.509 "bdev_nvme_detach_controller", 00:06:41.509 "bdev_nvme_get_controllers", 00:06:41.509 "bdev_nvme_attach_controller", 00:06:41.509 "bdev_nvme_set_hotplug", 00:06:41.510 "bdev_nvme_set_options", 00:06:41.510 "bdev_passthru_delete", 00:06:41.510 "bdev_passthru_create", 00:06:41.510 "bdev_lvol_set_parent_bdev", 00:06:41.510 "bdev_lvol_set_parent", 00:06:41.510 "bdev_lvol_check_shallow_copy", 00:06:41.510 "bdev_lvol_start_shallow_copy", 00:06:41.510 "bdev_lvol_grow_lvstore", 00:06:41.510 "bdev_lvol_get_lvols", 00:06:41.510 "bdev_lvol_get_lvstores", 00:06:41.510 "bdev_lvol_delete", 00:06:41.510 "bdev_lvol_set_read_only", 00:06:41.510 "bdev_lvol_resize", 00:06:41.510 "bdev_lvol_decouple_parent", 00:06:41.510 "bdev_lvol_inflate", 00:06:41.510 "bdev_lvol_rename", 00:06:41.510 "bdev_lvol_clone_bdev", 00:06:41.510 "bdev_lvol_clone", 00:06:41.510 "bdev_lvol_snapshot", 00:06:41.510 "bdev_lvol_create", 00:06:41.510 "bdev_lvol_delete_lvstore", 00:06:41.510 
"bdev_lvol_rename_lvstore", 00:06:41.510 "bdev_lvol_create_lvstore", 00:06:41.510 "bdev_raid_set_options", 00:06:41.510 "bdev_raid_remove_base_bdev", 00:06:41.510 "bdev_raid_add_base_bdev", 00:06:41.510 "bdev_raid_delete", 00:06:41.510 "bdev_raid_create", 00:06:41.510 "bdev_raid_get_bdevs", 00:06:41.510 "bdev_error_inject_error", 00:06:41.510 "bdev_error_delete", 00:06:41.510 "bdev_error_create", 00:06:41.510 "bdev_split_delete", 00:06:41.510 "bdev_split_create", 00:06:41.510 "bdev_delay_delete", 00:06:41.510 "bdev_delay_create", 00:06:41.510 "bdev_delay_update_latency", 00:06:41.510 "bdev_zone_block_delete", 00:06:41.510 "bdev_zone_block_create", 00:06:41.510 "blobfs_create", 00:06:41.510 "blobfs_detect", 00:06:41.510 "blobfs_set_cache_size", 00:06:41.510 "bdev_aio_delete", 00:06:41.510 "bdev_aio_rescan", 00:06:41.510 "bdev_aio_create", 00:06:41.510 "bdev_ftl_set_property", 00:06:41.510 "bdev_ftl_get_properties", 00:06:41.510 "bdev_ftl_get_stats", 00:06:41.510 "bdev_ftl_unmap", 00:06:41.510 "bdev_ftl_unload", 00:06:41.510 "bdev_ftl_delete", 00:06:41.510 "bdev_ftl_load", 00:06:41.510 "bdev_ftl_create", 00:06:41.510 "bdev_virtio_attach_controller", 00:06:41.510 "bdev_virtio_scsi_get_devices", 00:06:41.510 "bdev_virtio_detach_controller", 00:06:41.510 "bdev_virtio_blk_set_hotplug", 00:06:41.510 "bdev_iscsi_delete", 00:06:41.510 "bdev_iscsi_create", 00:06:41.510 "bdev_iscsi_set_options", 00:06:41.510 "accel_error_inject_error", 00:06:41.510 "ioat_scan_accel_module", 00:06:41.510 "dsa_scan_accel_module", 00:06:41.510 "iaa_scan_accel_module", 00:06:41.510 "vfu_virtio_create_scsi_endpoint", 00:06:41.510 "vfu_virtio_scsi_remove_target", 00:06:41.510 "vfu_virtio_scsi_add_target", 00:06:41.510 "vfu_virtio_create_blk_endpoint", 00:06:41.510 "vfu_virtio_delete_endpoint", 00:06:41.510 "keyring_file_remove_key", 00:06:41.510 "keyring_file_add_key", 00:06:41.510 "keyring_linux_set_options", 00:06:41.510 "iscsi_get_histogram", 00:06:41.510 "iscsi_enable_histogram", 00:06:41.510 "iscsi_set_options", 00:06:41.510 "iscsi_get_auth_groups", 00:06:41.510 "iscsi_auth_group_remove_secret", 00:06:41.510 "iscsi_auth_group_add_secret", 00:06:41.510 "iscsi_delete_auth_group", 00:06:41.510 "iscsi_create_auth_group", 00:06:41.510 "iscsi_set_discovery_auth", 00:06:41.510 "iscsi_get_options", 00:06:41.510 "iscsi_target_node_request_logout", 00:06:41.510 "iscsi_target_node_set_redirect", 00:06:41.510 "iscsi_target_node_set_auth", 00:06:41.510 "iscsi_target_node_add_lun", 00:06:41.510 "iscsi_get_stats", 00:06:41.510 "iscsi_get_connections", 00:06:41.510 "iscsi_portal_group_set_auth", 00:06:41.510 "iscsi_start_portal_group", 00:06:41.510 "iscsi_delete_portal_group", 00:06:41.510 "iscsi_create_portal_group", 00:06:41.510 "iscsi_get_portal_groups", 00:06:41.510 "iscsi_delete_target_node", 00:06:41.510 "iscsi_target_node_remove_pg_ig_maps", 00:06:41.510 "iscsi_target_node_add_pg_ig_maps", 00:06:41.510 "iscsi_create_target_node", 00:06:41.510 "iscsi_get_target_nodes", 00:06:41.510 "iscsi_delete_initiator_group", 00:06:41.510 "iscsi_initiator_group_remove_initiators", 00:06:41.510 "iscsi_initiator_group_add_initiators", 00:06:41.510 "iscsi_create_initiator_group", 00:06:41.510 "iscsi_get_initiator_groups", 00:06:41.510 "nvmf_set_crdt", 00:06:41.510 "nvmf_set_config", 00:06:41.510 "nvmf_set_max_subsystems", 00:06:41.510 "nvmf_stop_mdns_prr", 00:06:41.510 "nvmf_publish_mdns_prr", 00:06:41.510 "nvmf_subsystem_get_listeners", 00:06:41.510 "nvmf_subsystem_get_qpairs", 00:06:41.510 "nvmf_subsystem_get_controllers", 00:06:41.510 
"nvmf_get_stats", 00:06:41.510 "nvmf_get_transports", 00:06:41.510 "nvmf_create_transport", 00:06:41.510 "nvmf_get_targets", 00:06:41.510 "nvmf_delete_target", 00:06:41.510 "nvmf_create_target", 00:06:41.510 "nvmf_subsystem_allow_any_host", 00:06:41.510 "nvmf_subsystem_remove_host", 00:06:41.510 "nvmf_subsystem_add_host", 00:06:41.510 "nvmf_ns_remove_host", 00:06:41.510 "nvmf_ns_add_host", 00:06:41.510 "nvmf_subsystem_remove_ns", 00:06:41.510 "nvmf_subsystem_add_ns", 00:06:41.510 "nvmf_subsystem_listener_set_ana_state", 00:06:41.510 "nvmf_discovery_get_referrals", 00:06:41.510 "nvmf_discovery_remove_referral", 00:06:41.510 "nvmf_discovery_add_referral", 00:06:41.510 "nvmf_subsystem_remove_listener", 00:06:41.510 "nvmf_subsystem_add_listener", 00:06:41.510 "nvmf_delete_subsystem", 00:06:41.510 "nvmf_create_subsystem", 00:06:41.510 "nvmf_get_subsystems", 00:06:41.510 "env_dpdk_get_mem_stats", 00:06:41.510 "nbd_get_disks", 00:06:41.510 "nbd_stop_disk", 00:06:41.510 "nbd_start_disk", 00:06:41.510 "ublk_recover_disk", 00:06:41.510 "ublk_get_disks", 00:06:41.510 "ublk_stop_disk", 00:06:41.510 "ublk_start_disk", 00:06:41.510 "ublk_destroy_target", 00:06:41.510 "ublk_create_target", 00:06:41.510 "virtio_blk_create_transport", 00:06:41.510 "virtio_blk_get_transports", 00:06:41.510 "vhost_controller_set_coalescing", 00:06:41.510 "vhost_get_controllers", 00:06:41.510 "vhost_delete_controller", 00:06:41.510 "vhost_create_blk_controller", 00:06:41.510 "vhost_scsi_controller_remove_target", 00:06:41.510 "vhost_scsi_controller_add_target", 00:06:41.510 "vhost_start_scsi_controller", 00:06:41.510 "vhost_create_scsi_controller", 00:06:41.510 "thread_set_cpumask", 00:06:41.510 "framework_get_governor", 00:06:41.510 "framework_get_scheduler", 00:06:41.510 "framework_set_scheduler", 00:06:41.510 "framework_get_reactors", 00:06:41.510 "thread_get_io_channels", 00:06:41.510 "thread_get_pollers", 00:06:41.510 "thread_get_stats", 00:06:41.510 "framework_monitor_context_switch", 00:06:41.510 "spdk_kill_instance", 00:06:41.510 "log_enable_timestamps", 00:06:41.510 "log_get_flags", 00:06:41.510 "log_clear_flag", 00:06:41.510 "log_set_flag", 00:06:41.510 "log_get_level", 00:06:41.510 "log_set_level", 00:06:41.510 "log_get_print_level", 00:06:41.510 "log_set_print_level", 00:06:41.510 "framework_enable_cpumask_locks", 00:06:41.510 "framework_disable_cpumask_locks", 00:06:41.510 "framework_wait_init", 00:06:41.510 "framework_start_init", 00:06:41.510 "scsi_get_devices", 00:06:41.510 "bdev_get_histogram", 00:06:41.510 "bdev_enable_histogram", 00:06:41.510 "bdev_set_qos_limit", 00:06:41.510 "bdev_set_qd_sampling_period", 00:06:41.510 "bdev_get_bdevs", 00:06:41.510 "bdev_reset_iostat", 00:06:41.510 "bdev_get_iostat", 00:06:41.510 "bdev_examine", 00:06:41.510 "bdev_wait_for_examine", 00:06:41.510 "bdev_set_options", 00:06:41.510 "notify_get_notifications", 00:06:41.510 "notify_get_types", 00:06:41.510 "accel_get_stats", 00:06:41.510 "accel_set_options", 00:06:41.510 "accel_set_driver", 00:06:41.510 "accel_crypto_key_destroy", 00:06:41.510 "accel_crypto_keys_get", 00:06:41.510 "accel_crypto_key_create", 00:06:41.510 "accel_assign_opc", 00:06:41.510 "accel_get_module_info", 00:06:41.510 "accel_get_opc_assignments", 00:06:41.510 "vmd_rescan", 00:06:41.510 "vmd_remove_device", 00:06:41.510 "vmd_enable", 00:06:41.510 "sock_get_default_impl", 00:06:41.510 "sock_set_default_impl", 00:06:41.510 "sock_impl_set_options", 00:06:41.510 "sock_impl_get_options", 00:06:41.510 "iobuf_get_stats", 00:06:41.510 "iobuf_set_options", 
00:06:41.510 "keyring_get_keys", 00:06:41.510 "framework_get_pci_devices", 00:06:41.510 "framework_get_config", 00:06:41.510 "framework_get_subsystems", 00:06:41.510 "vfu_tgt_set_base_path", 00:06:41.510 "trace_get_info", 00:06:41.510 "trace_get_tpoint_group_mask", 00:06:41.510 "trace_disable_tpoint_group", 00:06:41.510 "trace_enable_tpoint_group", 00:06:41.510 "trace_clear_tpoint_mask", 00:06:41.510 "trace_set_tpoint_mask", 00:06:41.510 "spdk_get_version", 00:06:41.510 "rpc_get_methods" 00:06:41.510 ] 00:06:41.510 09:15:28 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:41.510 09:15:28 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:41.510 09:15:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:41.510 09:15:28 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:41.510 09:15:28 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 475225 00:06:41.510 09:15:28 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 475225 ']' 00:06:41.510 09:15:28 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 475225 00:06:41.510 09:15:28 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:41.510 09:15:28 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:41.510 09:15:28 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 475225 00:06:41.510 09:15:28 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:41.510 09:15:28 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:41.510 09:15:28 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 475225' 00:06:41.510 killing process with pid 475225 00:06:41.510 09:15:28 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 475225 00:06:41.510 09:15:28 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 475225 00:06:41.771 00:06:41.771 real 0m1.393s 00:06:41.771 user 0m2.580s 00:06:41.771 sys 0m0.399s 00:06:41.771 09:15:28 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.771 09:15:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:41.771 ************************************ 00:06:41.771 END TEST spdkcli_tcp 00:06:41.771 ************************************ 00:06:41.771 09:15:28 -- common/autotest_common.sh@1142 -- # return 0 00:06:41.771 09:15:28 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:41.771 09:15:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:41.771 09:15:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.771 09:15:28 -- common/autotest_common.sh@10 -- # set +x 00:06:41.771 ************************************ 00:06:41.771 START TEST dpdk_mem_utility 00:06:41.771 ************************************ 00:06:41.771 09:15:28 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:42.032 * Looking for test storage... 
00:06:42.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:42.032 09:15:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:42.032 09:15:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=475518 00:06:42.032 09:15:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 475518 00:06:42.032 09:15:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:42.032 09:15:29 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 475518 ']' 00:06:42.032 09:15:29 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.032 09:15:29 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.032 09:15:29 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.032 09:15:29 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.032 09:15:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:42.032 [2024-07-15 09:15:29.098219] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:06:42.032 [2024-07-15 09:15:29.098293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid475518 ] 00:06:42.032 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.032 [2024-07-15 09:15:29.170003] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.293 [2024-07-15 09:15:29.245422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.864 09:15:29 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.864 09:15:29 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:42.865 09:15:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:42.865 09:15:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:42.865 09:15:29 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.865 09:15:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:42.865 { 00:06:42.865 "filename": "/tmp/spdk_mem_dump.txt" 00:06:42.865 } 00:06:42.865 09:15:29 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.865 09:15:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:42.865 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:42.865 1 heaps totaling size 814.000000 MiB 00:06:42.865 size: 814.000000 MiB heap id: 0 00:06:42.865 end heaps---------- 00:06:42.865 8 mempools totaling size 598.116089 MiB 00:06:42.865 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:42.865 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:42.865 size: 84.521057 MiB name: bdev_io_475518 00:06:42.865 size: 51.011292 MiB name: evtpool_475518 00:06:42.865 size: 
50.003479 MiB name: msgpool_475518 00:06:42.865 size: 21.763794 MiB name: PDU_Pool 00:06:42.865 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:42.865 size: 0.026123 MiB name: Session_Pool 00:06:42.865 end mempools------- 00:06:42.865 6 memzones totaling size 4.142822 MiB 00:06:42.865 size: 1.000366 MiB name: RG_ring_0_475518 00:06:42.865 size: 1.000366 MiB name: RG_ring_1_475518 00:06:42.865 size: 1.000366 MiB name: RG_ring_4_475518 00:06:42.865 size: 1.000366 MiB name: RG_ring_5_475518 00:06:42.865 size: 0.125366 MiB name: RG_ring_2_475518 00:06:42.865 size: 0.015991 MiB name: RG_ring_3_475518 00:06:42.865 end memzones------- 00:06:42.865 09:15:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:42.865 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:42.865 list of free elements. size: 12.519348 MiB 00:06:42.865 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:42.865 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:42.865 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:42.865 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:42.865 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:42.865 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:42.865 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:42.865 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:42.865 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:42.865 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:42.865 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:42.865 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:42.865 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:42.865 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:42.865 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:42.865 list of standard malloc elements. 
size: 199.218079 MiB 00:06:42.865 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:42.865 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:42.865 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:42.865 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:42.865 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:42.865 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:42.865 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:42.865 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:42.865 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:42.865 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:42.865 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:42.865 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:42.865 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:42.865 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:42.865 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:42.865 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:42.865 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:42.865 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:42.865 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:42.865 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:42.865 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:42.865 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:42.865 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:42.865 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:42.865 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:42.865 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:42.865 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:42.865 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:42.865 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:42.865 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:42.865 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:42.865 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:42.865 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:42.865 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:42.865 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:42.865 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:42.865 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:42.865 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:42.865 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:42.865 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:42.865 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:42.865 list of memzone associated elements. 
size: 602.262573 MiB 00:06:42.865 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:42.865 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:42.865 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:42.865 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:42.865 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:42.865 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_475518_0 00:06:42.865 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:42.865 associated memzone info: size: 48.002930 MiB name: MP_evtpool_475518_0 00:06:42.865 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:42.865 associated memzone info: size: 48.002930 MiB name: MP_msgpool_475518_0 00:06:42.865 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:42.865 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:42.865 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:42.865 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:42.865 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:42.865 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_475518 00:06:42.865 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:42.865 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_475518 00:06:42.865 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:42.865 associated memzone info: size: 1.007996 MiB name: MP_evtpool_475518 00:06:42.865 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:42.865 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:42.865 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:42.865 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:42.865 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:42.865 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:42.865 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:42.865 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:42.865 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:42.865 associated memzone info: size: 1.000366 MiB name: RG_ring_0_475518 00:06:42.865 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:42.865 associated memzone info: size: 1.000366 MiB name: RG_ring_1_475518 00:06:42.865 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:42.865 associated memzone info: size: 1.000366 MiB name: RG_ring_4_475518 00:06:42.865 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:42.865 associated memzone info: size: 1.000366 MiB name: RG_ring_5_475518 00:06:42.865 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:42.865 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_475518 00:06:42.865 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:42.865 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:42.865 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:42.865 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:42.865 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:42.865 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:42.865 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:42.865 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_475518 00:06:42.865 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:42.865 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:42.865 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:42.865 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:42.865 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:42.865 associated memzone info: size: 0.015991 MiB name: RG_ring_3_475518 00:06:42.865 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:42.865 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:42.865 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:42.865 associated memzone info: size: 0.000183 MiB name: MP_msgpool_475518 00:06:42.865 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:42.865 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_475518 00:06:42.865 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:42.865 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:42.865 09:15:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:42.865 09:15:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 475518 00:06:42.865 09:15:29 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 475518 ']' 00:06:42.865 09:15:29 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 475518 00:06:42.865 09:15:29 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:42.865 09:15:29 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.865 09:15:29 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 475518 00:06:42.865 09:15:30 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:42.866 09:15:30 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:42.866 09:15:30 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 475518' 00:06:42.866 killing process with pid 475518 00:06:42.866 09:15:30 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 475518 00:06:42.866 09:15:30 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 475518 00:06:43.126 00:06:43.126 real 0m1.283s 00:06:43.126 user 0m1.328s 00:06:43.126 sys 0m0.387s 00:06:43.126 09:15:30 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.126 09:15:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:43.126 ************************************ 00:06:43.126 END TEST dpdk_mem_utility 00:06:43.126 ************************************ 00:06:43.126 09:15:30 -- common/autotest_common.sh@1142 -- # return 0 00:06:43.126 09:15:30 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:43.126 09:15:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.126 09:15:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.126 09:15:30 -- common/autotest_common.sh@10 -- # set +x 00:06:43.126 ************************************ 00:06:43.126 START TEST event 00:06:43.126 ************************************ 00:06:43.126 09:15:30 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:43.387 * Looking for test storage... 
00:06:43.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:43.387 09:15:30 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:43.387 09:15:30 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:43.387 09:15:30 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:43.387 09:15:30 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:43.387 09:15:30 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.387 09:15:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.387 ************************************ 00:06:43.387 START TEST event_perf 00:06:43.387 ************************************ 00:06:43.387 09:15:30 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:43.387 Running I/O for 1 seconds...[2024-07-15 09:15:30.462011] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:06:43.387 [2024-07-15 09:15:30.462120] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid475765 ] 00:06:43.387 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.387 [2024-07-15 09:15:30.537516] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:43.648 [2024-07-15 09:15:30.616816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.648 [2024-07-15 09:15:30.617010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.648 Running I/O for 1 seconds...[2024-07-15 09:15:30.617010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.648 [2024-07-15 09:15:30.616885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.590 00:06:44.590 lcore 0: 177759 00:06:44.590 lcore 1: 177760 00:06:44.590 lcore 2: 177755 00:06:44.590 lcore 3: 177757 00:06:44.590 done. 00:06:44.590 00:06:44.590 real 0m1.231s 00:06:44.590 user 0m4.141s 00:06:44.590 sys 0m0.087s 00:06:44.590 09:15:31 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.590 09:15:31 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:44.590 ************************************ 00:06:44.590 END TEST event_perf 00:06:44.590 ************************************ 00:06:44.590 09:15:31 event -- common/autotest_common.sh@1142 -- # return 0 00:06:44.590 09:15:31 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:44.590 09:15:31 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:44.590 09:15:31 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.590 09:15:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.590 ************************************ 00:06:44.590 START TEST event_reactor 00:06:44.590 ************************************ 00:06:44.590 09:15:31 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:44.590 [2024-07-15 09:15:31.767316] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:06:44.590 [2024-07-15 09:15:31.767394] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476098 ] 00:06:44.851 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.851 [2024-07-15 09:15:31.836800] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.851 [2024-07-15 09:15:31.900164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.793 test_start 00:06:45.793 oneshot 00:06:45.793 tick 100 00:06:45.793 tick 100 00:06:45.793 tick 250 00:06:45.793 tick 100 00:06:45.793 tick 100 00:06:45.793 tick 100 00:06:45.793 tick 250 00:06:45.793 tick 500 00:06:45.793 tick 100 00:06:45.793 tick 100 00:06:45.793 tick 250 00:06:45.793 tick 100 00:06:45.793 tick 100 00:06:45.793 test_end 00:06:45.793 00:06:45.793 real 0m1.206s 00:06:45.793 user 0m1.125s 00:06:45.793 sys 0m0.077s 00:06:45.793 09:15:32 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.793 09:15:32 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:45.793 ************************************ 00:06:45.793 END TEST event_reactor 00:06:45.793 ************************************ 00:06:45.793 09:15:32 event -- common/autotest_common.sh@1142 -- # return 0 00:06:45.793 09:15:32 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:45.793 09:15:32 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:45.793 09:15:32 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.793 09:15:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.054 ************************************ 00:06:46.054 START TEST event_reactor_perf 00:06:46.054 ************************************ 00:06:46.054 09:15:33 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:46.054 [2024-07-15 09:15:33.049618] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:06:46.054 [2024-07-15 09:15:33.049713] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476446 ] 00:06:46.054 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.054 [2024-07-15 09:15:33.121953] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.054 [2024-07-15 09:15:33.190522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.435 test_start 00:06:47.435 test_end 00:06:47.435 Performance: 369976 events per second 00:06:47.435 00:06:47.435 real 0m1.215s 00:06:47.435 user 0m1.130s 00:06:47.435 sys 0m0.081s 00:06:47.435 09:15:34 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.435 09:15:34 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:47.435 ************************************ 00:06:47.435 END TEST event_reactor_perf 00:06:47.435 ************************************ 00:06:47.435 09:15:34 event -- common/autotest_common.sh@1142 -- # return 0 00:06:47.435 09:15:34 event -- event/event.sh@49 -- # uname -s 00:06:47.435 09:15:34 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:47.435 09:15:34 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:47.435 09:15:34 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.435 09:15:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.435 09:15:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:47.435 ************************************ 00:06:47.435 START TEST event_scheduler 00:06:47.435 ************************************ 00:06:47.435 09:15:34 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:47.436 * Looking for test storage... 00:06:47.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:47.436 09:15:34 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:47.436 09:15:34 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=476831 00:06:47.436 09:15:34 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:47.436 09:15:34 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:47.436 09:15:34 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 476831 00:06:47.436 09:15:34 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 476831 ']' 00:06:47.436 09:15:34 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.436 09:15:34 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.436 09:15:34 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:47.436 09:15:34 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.436 09:15:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:47.436 [2024-07-15 09:15:34.479002] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:06:47.436 [2024-07-15 09:15:34.479071] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476831 ] 00:06:47.436 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.436 [2024-07-15 09:15:34.540945] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:47.436 [2024-07-15 09:15:34.607303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.436 [2024-07-15 09:15:34.607462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.436 [2024-07-15 09:15:34.607578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.436 [2024-07-15 09:15:34.607580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.379 09:15:35 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.379 09:15:35 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:48.379 09:15:35 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:48.379 09:15:35 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.379 09:15:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:48.379 [2024-07-15 09:15:35.269722] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:48.379 [2024-07-15 09:15:35.269735] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:48.379 [2024-07-15 09:15:35.269743] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:48.380 [2024-07-15 09:15:35.269747] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:48.380 [2024-07-15 09:15:35.269753] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:48.380 09:15:35 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.380 09:15:35 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:48.380 09:15:35 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.380 09:15:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:48.380 [2024-07-15 09:15:35.323210] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:48.380 09:15:35 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.380 09:15:35 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:48.380 09:15:35 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.380 09:15:35 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.380 09:15:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:48.380 ************************************ 00:06:48.380 START TEST scheduler_create_thread 00:06:48.380 ************************************ 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.380 2 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.380 3 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.380 4 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.380 5 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.380 6 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.380 7 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.380 8 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.380 9 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.380 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.952 10 00:06:48.952 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.952 09:15:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:48.952 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.952 09:15:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.335 09:15:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.335 09:15:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:50.335 09:15:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:50.335 09:15:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.335 09:15:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.904 09:15:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.904 09:15:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:50.904 09:15:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.905 09:15:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.843 09:15:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.843 09:15:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:51.843 09:15:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:51.843 09:15:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.843 09:15:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.440 09:15:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.440 00:06:52.440 real 0m4.223s 00:06:52.440 user 0m0.027s 00:06:52.440 sys 0m0.004s 00:06:52.440 09:15:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.440 09:15:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.440 ************************************ 00:06:52.440 END TEST scheduler_create_thread 00:06:52.440 ************************************ 00:06:52.440 09:15:39 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:52.440 09:15:39 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:52.440 09:15:39 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 476831 00:06:52.440 09:15:39 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 476831 ']' 00:06:52.440 09:15:39 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 476831 00:06:52.440 09:15:39 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:52.440 09:15:39 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:52.440 09:15:39 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 476831 00:06:52.700 09:15:39 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:52.700 09:15:39 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:52.700 09:15:39 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 476831' 00:06:52.700 killing process with pid 476831 00:06:52.700 09:15:39 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 476831 00:06:52.700 09:15:39 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 476831 00:06:52.700 [2024-07-15 09:15:39.864349] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:52.960 00:06:52.960 real 0m5.713s 00:06:52.960 user 0m12.733s 00:06:52.960 sys 0m0.371s 00:06:52.960 09:15:40 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.960 09:15:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:52.960 ************************************ 00:06:52.960 END TEST event_scheduler 00:06:52.960 ************************************ 00:06:52.960 09:15:40 event -- common/autotest_common.sh@1142 -- # return 0 00:06:52.960 09:15:40 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:52.960 09:15:40 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:52.960 09:15:40 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.960 09:15:40 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.960 09:15:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.960 ************************************ 00:06:52.960 START TEST app_repeat 00:06:52.960 ************************************ 00:06:52.960 09:15:40 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:52.960 09:15:40 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.960 09:15:40 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.960 09:15:40 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:52.960 09:15:40 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.960 09:15:40 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:52.960 09:15:40 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:52.960 09:15:40 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:52.960 09:15:40 event.app_repeat -- event/event.sh@19 -- # repeat_pid=477895 00:06:52.960 09:15:40 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:52.960 09:15:40 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:52.960 09:15:40 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 477895' 00:06:52.960 Process app_repeat pid: 477895 00:06:52.960 09:15:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:52.960 09:15:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:52.960 spdk_app_start Round 0 00:06:52.960 09:15:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 477895 /var/tmp/spdk-nbd.sock 00:06:52.960 09:15:40 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 477895 ']' 00:06:52.960 09:15:40 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:52.960 09:15:40 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.960 09:15:40 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:52.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:52.960 09:15:40 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.960 09:15:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:52.960 [2024-07-15 09:15:40.151974] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:06:52.960 [2024-07-15 09:15:40.152036] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477895 ] 00:06:53.219 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.219 [2024-07-15 09:15:40.220397] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.219 [2024-07-15 09:15:40.285421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.219 [2024-07-15 09:15:40.285423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.790 09:15:40 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.790 09:15:40 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:53.790 09:15:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:54.051 Malloc0 00:06:54.051 09:15:41 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:54.051 Malloc1 00:06:54.312 09:15:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.312 09:15:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.312 09:15:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.312 09:15:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:54.312 09:15:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.312 09:15:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:54.312 09:15:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.312 09:15:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.312 09:15:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.312 09:15:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:54.312 09:15:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.312 09:15:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:54.312 09:15:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:54.312 09:15:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:54.312 09:15:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.312 09:15:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:54.312 /dev/nbd0 00:06:54.312 09:15:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:54.312 09:15:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:54.312 09:15:41 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:54.312 09:15:41 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:54.312 09:15:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:54.312 09:15:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:54.312 09:15:41 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:54.312 09:15:41 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:54.312 09:15:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:54.312 09:15:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:54.312 09:15:41 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:54.312 1+0 records in 00:06:54.312 1+0 records out 00:06:54.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277362 s, 14.8 MB/s 00:06:54.312 09:15:41 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:54.312 09:15:41 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:54.312 09:15:41 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:54.312 09:15:41 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:54.312 09:15:41 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:54.312 09:15:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.312 09:15:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.312 09:15:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:54.574 /dev/nbd1 00:06:54.574 09:15:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:54.574 09:15:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:54.574 09:15:41 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:54.574 09:15:41 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:54.574 09:15:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:54.574 09:15:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:54.574 09:15:41 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:54.574 09:15:41 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:54.574 09:15:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:54.574 09:15:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:54.574 09:15:41 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:54.574 1+0 records in 00:06:54.574 1+0 records out 00:06:54.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299989 s, 13.7 MB/s 00:06:54.574 09:15:41 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:54.574 09:15:41 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:54.574 09:15:41 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:54.574 09:15:41 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:54.574 09:15:41 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:54.574 09:15:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.574 09:15:41 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.574 09:15:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:54.574 09:15:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.574 09:15:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:54.836 { 00:06:54.836 "nbd_device": "/dev/nbd0", 00:06:54.836 "bdev_name": "Malloc0" 00:06:54.836 }, 00:06:54.836 { 00:06:54.836 "nbd_device": "/dev/nbd1", 00:06:54.836 "bdev_name": "Malloc1" 00:06:54.836 } 00:06:54.836 ]' 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:54.836 { 00:06:54.836 "nbd_device": "/dev/nbd0", 00:06:54.836 "bdev_name": "Malloc0" 00:06:54.836 }, 00:06:54.836 { 00:06:54.836 "nbd_device": "/dev/nbd1", 00:06:54.836 "bdev_name": "Malloc1" 00:06:54.836 } 00:06:54.836 ]' 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:54.836 /dev/nbd1' 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:54.836 /dev/nbd1' 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:54.836 256+0 records in 00:06:54.836 256+0 records out 00:06:54.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124223 s, 84.4 MB/s 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:54.836 256+0 records in 00:06:54.836 256+0 records out 00:06:54.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016469 s, 63.7 MB/s 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:54.836 256+0 records in 00:06:54.836 256+0 records out 00:06:54.836 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0167056 s, 62.8 MB/s 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.836 09:15:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:55.097 09:15:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:55.097 09:15:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:55.097 09:15:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:55.097 09:15:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.097 09:15:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.097 09:15:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:55.097 09:15:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:55.097 09:15:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.097 09:15:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.097 09:15:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:55.097 09:15:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:55.097 09:15:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:55.097 09:15:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:55.097 09:15:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.097 09:15:42 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.097 09:15:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:55.097 09:15:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:55.097 09:15:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.097 09:15:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.097 09:15:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.097 09:15:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.358 09:15:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:55.358 09:15:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:55.358 09:15:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.358 09:15:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:55.358 09:15:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:55.358 09:15:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.358 09:15:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:55.358 09:15:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:55.358 09:15:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:55.358 09:15:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:55.358 09:15:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:55.358 09:15:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:55.358 09:15:42 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:55.619 09:15:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:55.619 [2024-07-15 09:15:42.745028] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:55.619 [2024-07-15 09:15:42.809418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.619 [2024-07-15 09:15:42.809419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.880 [2024-07-15 09:15:42.840677] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:55.880 [2024-07-15 09:15:42.840712] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:58.455 09:15:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:58.455 09:15:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:58.455 spdk_app_start Round 1 00:06:58.455 09:15:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 477895 /var/tmp/spdk-nbd.sock 00:06:58.455 09:15:45 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 477895 ']' 00:06:58.455 09:15:45 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:58.455 09:15:45 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.455 09:15:45 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:58.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
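Round 0 of app_repeat above is the nbd_rpc_data_verify flow: two malloc bdevs are exported as /dev/nbd0 and /dev/nbd1, a 1 MiB random pattern is written through each device with O_DIRECT and then compared back before the devices are stopped and the app is signalled. A minimal stand-alone sketch of that flow, not the test script itself ($SPDK_DIR and the temp file name are placeholders for the long workspace paths in this run):

    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    TMP=nbdrandtest

    # two 64 MiB malloc bdevs with a 4096-byte block size, exported over nbd
    $RPC bdev_malloc_create 64 4096          # -> Malloc0
    $RPC bdev_malloc_create 64 4096          # -> Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1

    # write a 1 MiB random pattern through each nbd device, then compare it back
    dd if=/dev/urandom of="$TMP" bs=4096 count=256
    for d in /dev/nbd0 /dev/nbd1; do
        dd if="$TMP" of="$d" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$TMP" "$d"             # non-zero exit means a data mismatch
    done
    rm "$TMP"

    # tear down the nbd exports and stop the app, as at the end of each round
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1
    $RPC spdk_kill_instance SIGTERM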
00:06:58.455 09:15:45 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.455 09:15:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:58.718 09:15:45 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.718 09:15:45 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:58.718 09:15:45 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.718 Malloc0 00:06:58.718 09:15:45 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.979 Malloc1 00:06:58.979 09:15:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.979 09:15:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.979 09:15:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.979 09:15:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:58.979 09:15:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.979 09:15:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:58.979 09:15:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.979 09:15:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.979 09:15:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.979 09:15:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.979 09:15:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.979 09:15:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:58.979 09:15:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:58.979 09:15:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.979 09:15:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.979 09:15:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:59.240 /dev/nbd0 00:06:59.240 09:15:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:59.240 09:15:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:59.240 09:15:46 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:59.240 09:15:46 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:59.240 09:15:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:59.240 09:15:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:59.240 09:15:46 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:59.240 09:15:46 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:59.240 09:15:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:59.240 09:15:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:59.240 09:15:46 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:59.240 1+0 records in 00:06:59.240 1+0 records out 00:06:59.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234369 s, 17.5 MB/s 00:06:59.240 09:15:46 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:59.241 09:15:46 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:59.241 09:15:46 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:59.241 09:15:46 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:59.241 09:15:46 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:59.241 09:15:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.241 09:15:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.241 09:15:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:59.241 /dev/nbd1 00:06:59.241 09:15:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:59.241 09:15:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:59.241 09:15:46 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:59.241 09:15:46 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:59.241 09:15:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:59.241 09:15:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:59.241 09:15:46 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:59.241 09:15:46 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:59.241 09:15:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:59.241 09:15:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:59.241 09:15:46 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:59.505 1+0 records in 00:06:59.505 1+0 records out 00:06:59.505 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027964 s, 14.6 MB/s 00:06:59.505 09:15:46 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:59.505 09:15:46 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:59.505 09:15:46 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:59.505 09:15:46 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:59.505 09:15:46 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:59.505 09:15:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.505 09:15:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.505 09:15:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.505 09:15:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.505 09:15:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.505 09:15:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:59.505 { 00:06:59.505 "nbd_device": "/dev/nbd0", 00:06:59.505 "bdev_name": "Malloc0" 00:06:59.505 }, 00:06:59.505 { 00:06:59.505 "nbd_device": "/dev/nbd1", 00:06:59.505 "bdev_name": "Malloc1" 00:06:59.505 } 00:06:59.505 ]' 00:06:59.505 09:15:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:59.505 { 00:06:59.505 "nbd_device": "/dev/nbd0", 00:06:59.505 "bdev_name": "Malloc0" 00:06:59.505 }, 00:06:59.505 { 00:06:59.505 "nbd_device": "/dev/nbd1", 00:06:59.505 "bdev_name": "Malloc1" 00:06:59.505 } 00:06:59.505 ]' 00:06:59.505 09:15:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.505 09:15:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:59.505 /dev/nbd1' 00:06:59.505 09:15:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:59.505 /dev/nbd1' 00:06:59.505 09:15:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.505 09:15:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:59.505 09:15:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:59.505 09:15:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:59.505 09:15:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:59.505 09:15:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:59.505 09:15:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.505 09:15:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.505 09:15:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:59.505 09:15:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:59.505 09:15:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:59.505 09:15:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:59.505 256+0 records in 00:06:59.505 256+0 records out 00:06:59.505 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115656 s, 90.7 MB/s 00:06:59.505 09:15:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.505 09:15:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:59.808 256+0 records in 00:06:59.808 256+0 records out 00:06:59.808 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0350829 s, 29.9 MB/s 00:06:59.808 09:15:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.808 09:15:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:59.808 256+0 records in 00:06:59.808 256+0 records out 00:06:59.808 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0311174 s, 33.7 MB/s 00:06:59.808 09:15:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:59.808 09:15:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.808 09:15:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.808 09:15:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:59.808 09:15:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:59.808 09:15:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:59.808 09:15:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:59.808 09:15:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.809 09:15:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:59.809 09:15:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.809 09:15:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:59.809 09:15:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:59.809 09:15:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:59.809 09:15:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.809 09:15:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.809 09:15:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.809 09:15:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:59.809 09:15:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.809 09:15:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:59.809 09:15:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:59.809 09:15:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:59.809 09:15:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:59.809 09:15:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.809 09:15:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.809 09:15:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:59.809 09:15:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:59.809 09:15:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.809 09:15:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.809 09:15:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:00.086 09:15:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:00.086 09:15:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:00.086 09:15:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:00.086 09:15:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.086 09:15:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.086 09:15:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:00.086 09:15:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:00.086 09:15:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.086 09:15:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.086 09:15:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.086 09:15:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.086 09:15:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:00.086 09:15:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:00.086 09:15:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.086 09:15:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:00.086 09:15:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:00.086 09:15:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.086 09:15:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:00.086 09:15:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:00.086 09:15:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:00.347 09:15:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:00.347 09:15:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:00.347 09:15:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:00.347 09:15:47 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:00.347 09:15:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:00.608 [2024-07-15 09:15:47.587408] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:00.608 [2024-07-15 09:15:47.650808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.608 [2024-07-15 09:15:47.650810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.608 [2024-07-15 09:15:47.682934] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:00.608 [2024-07-15 09:15:47.682973] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:03.920 09:15:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:03.920 09:15:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:03.920 spdk_app_start Round 2 00:07:03.920 09:15:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 477895 /var/tmp/spdk-nbd.sock 00:07:03.920 09:15:50 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 477895 ']' 00:07:03.920 09:15:50 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:03.920 09:15:50 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.920 09:15:50 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:03.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:03.920 09:15:50 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.920 09:15:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:03.920 09:15:50 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.920 09:15:50 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:03.920 09:15:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:03.920 Malloc0 00:07:03.920 09:15:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:03.920 Malloc1 00:07:03.920 09:15:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:03.920 09:15:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.920 09:15:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.920 09:15:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:03.920 09:15:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.920 09:15:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:03.920 09:15:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:03.920 09:15:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.920 09:15:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.920 09:15:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:03.920 09:15:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.920 09:15:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:03.920 09:15:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:03.920 09:15:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:03.920 09:15:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.920 09:15:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:03.920 /dev/nbd0 00:07:03.920 09:15:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:03.920 09:15:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:03.920 09:15:51 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:03.920 09:15:51 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:03.920 09:15:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:03.920 09:15:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:03.920 09:15:51 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:03.920 09:15:51 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:03.920 09:15:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:03.920 09:15:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:03.920 09:15:51 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:03.920 1+0 records in 00:07:03.920 1+0 records out 00:07:03.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254291 s, 16.1 MB/s 00:07:03.920 09:15:51 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:04.182 09:15:51 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:04.182 09:15:51 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:04.182 09:15:51 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:04.182 09:15:51 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:04.182 09:15:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.182 09:15:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.182 09:15:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:04.182 /dev/nbd1 00:07:04.182 09:15:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:04.182 09:15:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:04.182 09:15:51 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:04.182 09:15:51 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:04.182 09:15:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:04.182 09:15:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:04.182 09:15:51 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:04.182 09:15:51 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:04.182 09:15:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:04.182 09:15:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:04.182 09:15:51 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:04.182 1+0 records in 00:07:04.182 1+0 records out 00:07:04.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260801 s, 15.7 MB/s 00:07:04.182 09:15:51 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:04.182 09:15:51 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:04.182 09:15:51 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:04.182 09:15:51 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:04.182 09:15:51 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:04.182 09:15:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.182 09:15:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.182 09:15:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.182 09:15:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.182 09:15:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:04.444 { 00:07:04.444 "nbd_device": "/dev/nbd0", 00:07:04.444 "bdev_name": "Malloc0" 00:07:04.444 }, 00:07:04.444 { 00:07:04.444 "nbd_device": "/dev/nbd1", 00:07:04.444 "bdev_name": "Malloc1" 00:07:04.444 } 00:07:04.444 ]' 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:04.444 { 00:07:04.444 "nbd_device": "/dev/nbd0", 00:07:04.444 "bdev_name": "Malloc0" 00:07:04.444 }, 00:07:04.444 { 00:07:04.444 "nbd_device": "/dev/nbd1", 00:07:04.444 "bdev_name": "Malloc1" 00:07:04.444 } 00:07:04.444 ]' 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:04.444 /dev/nbd1' 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:04.444 /dev/nbd1' 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:04.444 256+0 records in 00:07:04.444 256+0 records out 00:07:04.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124711 s, 84.1 MB/s 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:04.444 256+0 records in 00:07:04.444 256+0 records out 00:07:04.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0398884 s, 26.3 MB/s 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:04.444 256+0 records in 00:07:04.444 256+0 records out 00:07:04.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0422525 s, 24.8 MB/s 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.444 09:15:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:04.705 09:15:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.705 09:15:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:04.705 09:15:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.705 09:15:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.705 09:15:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.705 09:15:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:04.705 09:15:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.705 09:15:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:04.705 09:15:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.705 09:15:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.705 09:15:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.705 09:15:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.705 09:15:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.705 09:15:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:04.705 09:15:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:04.705 09:15:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.705 09:15:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.705 09:15:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:04.966 09:15:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:04.966 09:15:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:04.966 09:15:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:04.966 09:15:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.966 09:15:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.966 09:15:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:04.966 09:15:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:04.966 09:15:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.966 09:15:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.966 09:15:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.966 09:15:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.966 09:15:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:04.966 09:15:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:04.966 09:15:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.227 09:15:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:05.227 09:15:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:05.227 09:15:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.227 09:15:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:05.227 09:15:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:05.227 09:15:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:05.227 09:15:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:05.227 09:15:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:05.227 09:15:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:05.227 09:15:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:05.227 09:15:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:05.487 [2024-07-15 09:15:52.470971] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.487 [2024-07-15 09:15:52.534323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.487 [2024-07-15 09:15:52.534324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.487 [2024-07-15 09:15:52.565600] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:05.488 [2024-07-15 09:15:52.565635] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:08.786 09:15:55 event.app_repeat -- event/event.sh@38 -- # waitforlisten 477895 /var/tmp/spdk-nbd.sock 00:07:08.786 09:15:55 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 477895 ']' 00:07:08.786 09:15:55 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:08.786 09:15:55 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.787 09:15:55 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:08.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:08.787 09:15:55 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.787 09:15:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:08.787 09:15:55 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.787 09:15:55 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:08.787 09:15:55 event.app_repeat -- event/event.sh@39 -- # killprocess 477895 00:07:08.787 09:15:55 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 477895 ']' 00:07:08.787 09:15:55 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 477895 00:07:08.787 09:15:55 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:07:08.787 09:15:55 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:08.787 09:15:55 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 477895 00:07:08.787 09:15:55 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:08.787 09:15:55 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:08.787 09:15:55 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 477895' 00:07:08.787 killing process with pid 477895 00:07:08.787 09:15:55 event.app_repeat -- common/autotest_common.sh@967 -- # kill 477895 00:07:08.787 09:15:55 event.app_repeat -- common/autotest_common.sh@972 -- # wait 477895 00:07:08.787 spdk_app_start is called in Round 0. 00:07:08.787 Shutdown signal received, stop current app iteration 00:07:08.787 Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 reinitialization... 00:07:08.787 spdk_app_start is called in Round 1. 00:07:08.787 Shutdown signal received, stop current app iteration 00:07:08.787 Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 reinitialization... 00:07:08.787 spdk_app_start is called in Round 2. 00:07:08.787 Shutdown signal received, stop current app iteration 00:07:08.787 Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 reinitialization... 00:07:08.787 spdk_app_start is called in Round 3. 
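killprocess in the trace above is the autotest helper used throughout this run to stop a target by PID; the individual checks (kill -0 liveness probe, ps comm lookup, the reactor_0-vs-sudo guard, kill, wait) are all visible in the xtrace. A rough stand-alone equivalent, with the function body reconstructed from the trace rather than copied from the SPDK tree:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0        # nothing to do if it already exited
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 in this run
            [ "$name" = sudo ] && return 1            # refuse to kill a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                   # wait only reaps children of this shell
    }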
00:07:08.787 Shutdown signal received, stop current app iteration 00:07:08.787 09:15:55 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:08.787 09:15:55 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:08.787 00:07:08.787 real 0m15.548s 00:07:08.787 user 0m33.563s 00:07:08.787 sys 0m2.108s 00:07:08.787 09:15:55 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.787 09:15:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:08.787 ************************************ 00:07:08.787 END TEST app_repeat 00:07:08.787 ************************************ 00:07:08.787 09:15:55 event -- common/autotest_common.sh@1142 -- # return 0 00:07:08.787 09:15:55 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:08.787 09:15:55 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:08.787 09:15:55 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:08.787 09:15:55 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.787 09:15:55 event -- common/autotest_common.sh@10 -- # set +x 00:07:08.787 ************************************ 00:07:08.787 START TEST cpu_locks 00:07:08.787 ************************************ 00:07:08.787 09:15:55 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:08.787 * Looking for test storage... 00:07:08.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:08.787 09:15:55 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:08.787 09:15:55 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:08.787 09:15:55 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:08.787 09:15:55 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:08.787 09:15:55 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:08.787 09:15:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.787 09:15:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.787 ************************************ 00:07:08.787 START TEST default_locks 00:07:08.787 ************************************ 00:07:08.787 09:15:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:07:08.787 09:15:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=481311 00:07:08.787 09:15:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 481311 00:07:08.787 09:15:55 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 481311 ']' 00:07:08.787 09:15:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:08.787 09:15:55 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.787 09:15:55 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.787 09:15:55 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
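cpu_locks.sh begins here; default_locks launches spdk_tgt pinned to core 0 with -m 0x1 and then uses the locks_exist helper (traced in the next entries) to confirm the process holds its per-core lock file. A hedged standalone version of that check (the lslocks/grep usage and the spdk_cpu_lock name are from the trace; the variable assignment is just for the example):

    # locks_exist: the pid recorded at launch must hold a file lock whose path contains
    # "spdk_cpu_lock" (the later "lslocks: write error" line is only a warning on this rig).
    spdk_tgt_pid=481311                                   # pid captured when spdk_tgt was started
    if lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock; then
        echo "pid $spdk_tgt_pid holds its CPU core lock"
    fi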
00:07:08.787 09:15:55 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.787 09:15:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.787 [2024-07-15 09:15:55.935952] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:08.787 [2024-07-15 09:15:55.936021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481311 ] 00:07:08.787 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.049 [2024-07-15 09:15:56.007404] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.049 [2024-07-15 09:15:56.083080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.620 09:15:56 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.620 09:15:56 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:07:09.620 09:15:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 481311 00:07:09.620 09:15:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 481311 00:07:09.620 09:15:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:10.191 lslocks: write error 00:07:10.191 09:15:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 481311 00:07:10.191 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 481311 ']' 00:07:10.191 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 481311 00:07:10.191 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:07:10.191 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:10.191 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 481311 00:07:10.191 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:10.191 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:10.191 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 481311' 00:07:10.191 killing process with pid 481311 00:07:10.191 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 481311 00:07:10.191 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 481311 00:07:10.191 09:15:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 481311 00:07:10.191 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:10.191 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 481311 00:07:10.453 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:10.453 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.453 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:10.453 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.453 09:15:57 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 481311 00:07:10.453 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 481311 ']' 00:07:10.453 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.453 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.453 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.453 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.453 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (481311) - No such process 00:07:10.453 ERROR: process (pid: 481311) is no longer running 00:07:10.453 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.453 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:07:10.453 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:10.453 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.453 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:10.453 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.453 09:15:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:10.453 09:15:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:10.453 09:15:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:10.453 09:15:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:10.453 00:07:10.453 real 0m1.520s 00:07:10.453 user 0m1.587s 00:07:10.453 sys 0m0.545s 00:07:10.453 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.453 09:15:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.453 ************************************ 00:07:10.453 END TEST default_locks 00:07:10.453 ************************************ 00:07:10.453 09:15:57 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:10.453 09:15:57 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:10.453 09:15:57 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:10.453 09:15:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.453 09:15:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.453 ************************************ 00:07:10.453 START TEST default_locks_via_rpc 00:07:10.453 ************************************ 00:07:10.453 09:15:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:07:10.453 09:15:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=481623 00:07:10.453 09:15:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 481623 00:07:10.453 09:15:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:10.453 09:15:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 481623 ']' 00:07:10.453 09:15:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.453 09:15:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.453 09:15:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.453 09:15:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.453 09:15:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.453 [2024-07-15 09:15:57.529835] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:10.453 [2024-07-15 09:15:57.529886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481623 ] 00:07:10.453 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.453 [2024-07-15 09:15:57.597270] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.714 [2024-07-15 09:15:57.666738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.286 09:15:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.286 09:15:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:11.286 09:15:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:11.286 09:15:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.286 09:15:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.286 09:15:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.286 09:15:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:11.286 09:15:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:11.286 09:15:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:11.286 09:15:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:11.286 09:15:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:11.286 09:15:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.286 09:15:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.286 09:15:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.286 09:15:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 481623 00:07:11.286 09:15:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 481623 00:07:11.286 09:15:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.858 09:15:58 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 481623 00:07:11.858 09:15:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 481623 ']' 00:07:11.858 09:15:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 481623 00:07:11.858 09:15:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:07:11.858 09:15:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.858 09:15:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 481623 00:07:11.858 09:15:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:11.858 09:15:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:11.858 09:15:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 481623' 00:07:11.858 killing process with pid 481623 00:07:11.858 09:15:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 481623 00:07:11.858 09:15:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 481623 00:07:12.118 00:07:12.118 real 0m1.649s 00:07:12.118 user 0m1.734s 00:07:12.118 sys 0m0.572s 00:07:12.118 09:15:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.118 09:15:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.118 ************************************ 00:07:12.118 END TEST default_locks_via_rpc 00:07:12.118 ************************************ 00:07:12.118 09:15:59 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:12.118 09:15:59 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:12.119 09:15:59 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:12.119 09:15:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.119 09:15:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.119 ************************************ 00:07:12.119 START TEST non_locking_app_on_locked_coremask 00:07:12.119 ************************************ 00:07:12.119 09:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:07:12.119 09:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=482010 00:07:12.119 09:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 482010 /var/tmp/spdk.sock 00:07:12.119 09:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:12.119 09:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 482010 ']' 00:07:12.119 09:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.119 09:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.119 09:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.119 09:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.119 09:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.119 [2024-07-15 09:15:59.249114] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:12.119 [2024-07-15 09:15:59.249164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482010 ] 00:07:12.119 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.119 [2024-07-15 09:15:59.313992] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.379 [2024-07-15 09:15:59.378651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.951 09:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.951 09:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:12.951 09:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=482215 00:07:12.951 09:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 482215 /var/tmp/spdk2.sock 00:07:12.951 09:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:12.951 09:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 482215 ']' 00:07:12.951 09:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:12.951 09:16:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.951 09:16:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:12.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:12.951 09:16:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.951 09:16:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.951 [2024-07-15 09:16:00.055225] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:12.951 [2024-07-15 09:16:00.055285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482215 ] 00:07:12.951 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.212 [2024-07-15 09:16:00.153659] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
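non_locking_app_on_locked_coremask runs two targets on the same core: the first (pid 482010) takes the core-0 lock, and the second (pid 482215) is allowed to start anyway because it passes --disable-cpumask-locks and talks on its own RPC socket; the 'CPU core locks deactivated' notice above confirms the flag took effect. A condensed sketch of the two launches (binary path, masks, flag and socket are from the trace; backgrounding with & stands in for the waitforlisten helper):

    tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    "$tgt" -m 0x1 &                                                   # first target claims core 0
    "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # second shares core 0, no lock taken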
00:07:13.212 [2024-07-15 09:16:00.153694] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.212 [2024-07-15 09:16:00.284530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.784 09:16:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.784 09:16:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:13.784 09:16:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 482010 00:07:13.784 09:16:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 482010 00:07:13.784 09:16:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:14.044 lslocks: write error 00:07:14.044 09:16:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 482010 00:07:14.044 09:16:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 482010 ']' 00:07:14.044 09:16:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 482010 00:07:14.044 09:16:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:14.044 09:16:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:14.044 09:16:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 482010 00:07:14.044 09:16:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:14.044 09:16:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:14.044 09:16:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 482010' 00:07:14.044 killing process with pid 482010 00:07:14.044 09:16:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 482010 00:07:14.044 09:16:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 482010 00:07:14.613 09:16:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 482215 00:07:14.613 09:16:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 482215 ']' 00:07:14.613 09:16:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 482215 00:07:14.613 09:16:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:14.613 09:16:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:14.613 09:16:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 482215 00:07:14.613 09:16:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:14.613 09:16:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:14.613 09:16:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 482215' 00:07:14.613 killing 
process with pid 482215 00:07:14.613 09:16:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 482215 00:07:14.613 09:16:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 482215 00:07:14.873 00:07:14.873 real 0m2.640s 00:07:14.873 user 0m2.877s 00:07:14.873 sys 0m0.770s 00:07:14.873 09:16:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.873 09:16:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.873 ************************************ 00:07:14.873 END TEST non_locking_app_on_locked_coremask 00:07:14.873 ************************************ 00:07:14.873 09:16:01 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:14.873 09:16:01 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:14.873 09:16:01 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:14.873 09:16:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.873 09:16:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.873 ************************************ 00:07:14.873 START TEST locking_app_on_unlocked_coremask 00:07:14.873 ************************************ 00:07:14.873 09:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:14.873 09:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=482590 00:07:14.873 09:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 482590 /var/tmp/spdk.sock 00:07:14.873 09:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:14.873 09:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 482590 ']' 00:07:14.873 09:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.873 09:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.873 09:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.873 09:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.873 09:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.873 [2024-07-15 09:16:01.962002] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:14.874 [2024-07-15 09:16:01.962050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482590 ] 00:07:14.874 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.874 [2024-07-15 09:16:02.026815] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
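locking_app_on_unlocked_coremask, which starts here, is the mirror case: the first target (pid 482590) opts out with --disable-cpumask-locks, so the lock-enabled second target (pid 482809) is the one that ends up owning the core-0 lock file, which is what the later locks_exist 482809 check verifies. A short sketch of the launch order (flags, socket and the lslocks check are from the trace):

    tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    "$tgt" -m 0x1 --disable-cpumask-locks &      # first target: no core lock taken
    "$tgt" -m 0x1 -r /var/tmp/spdk2.sock &       # second target: acquires the core-0 lock
    # later: lslocks -p <second pid> | grep -q spdk_cpu_lock   (the locks_exist helper)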
00:07:14.874 [2024-07-15 09:16:02.026844] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.135 [2024-07-15 09:16:02.090656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.707 09:16:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.707 09:16:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:15.707 09:16:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=482809 00:07:15.707 09:16:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 482809 /var/tmp/spdk2.sock 00:07:15.707 09:16:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 482809 ']' 00:07:15.707 09:16:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:15.707 09:16:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.707 09:16:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.707 09:16:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:15.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:15.707 09:16:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.707 09:16:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.707 [2024-07-15 09:16:02.785194] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:07:15.707 [2024-07-15 09:16:02.785248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482809 ] 00:07:15.707 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.707 [2024-07-15 09:16:02.885776] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.968 [2024-07-15 09:16:03.015097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.539 09:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.539 09:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:16.539 09:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 482809 00:07:16.539 09:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 482809 00:07:16.539 09:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:16.798 lslocks: write error 00:07:16.798 09:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 482590 00:07:16.798 09:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 482590 ']' 00:07:16.798 09:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 482590 00:07:16.798 09:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:16.798 09:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:16.798 09:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 482590 00:07:16.798 09:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:16.798 09:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:16.798 09:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 482590' 00:07:16.798 killing process with pid 482590 00:07:16.798 09:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 482590 00:07:16.798 09:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 482590 00:07:17.366 09:16:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 482809 00:07:17.366 09:16:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 482809 ']' 00:07:17.366 09:16:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 482809 00:07:17.366 09:16:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:17.366 09:16:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:17.366 09:16:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 482809 00:07:17.366 09:16:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:17.366 09:16:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:17.366 09:16:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 482809' 00:07:17.366 killing process with pid 482809 00:07:17.366 09:16:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 482809 00:07:17.367 09:16:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 482809 00:07:17.625 00:07:17.626 real 0m2.766s 00:07:17.626 user 0m3.020s 00:07:17.626 sys 0m0.826s 00:07:17.626 09:16:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.626 09:16:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.626 ************************************ 00:07:17.626 END TEST locking_app_on_unlocked_coremask 00:07:17.626 ************************************ 00:07:17.626 09:16:04 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:17.626 09:16:04 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:17.626 09:16:04 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.626 09:16:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.626 09:16:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.626 ************************************ 00:07:17.626 START TEST locking_app_on_locked_coremask 00:07:17.626 ************************************ 00:07:17.626 09:16:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:17.626 09:16:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=483287 00:07:17.626 09:16:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 483287 /var/tmp/spdk.sock 00:07:17.626 09:16:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:17.626 09:16:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 483287 ']' 00:07:17.626 09:16:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.626 09:16:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.626 09:16:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.626 09:16:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.626 09:16:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.626 [2024-07-15 09:16:04.806239] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:07:17.626 [2024-07-15 09:16:04.806289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483287 ] 00:07:17.885 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.885 [2024-07-15 09:16:04.872787] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.885 [2024-07-15 09:16:04.936885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.453 09:16:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.453 09:16:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:18.453 09:16:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=483305 00:07:18.453 09:16:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:18.453 09:16:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 483305 /var/tmp/spdk2.sock 00:07:18.453 09:16:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:18.453 09:16:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 483305 /var/tmp/spdk2.sock 00:07:18.453 09:16:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:18.453 09:16:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.453 09:16:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:18.453 09:16:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.453 09:16:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 483305 /var/tmp/spdk2.sock 00:07:18.453 09:16:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 483305 ']' 00:07:18.453 09:16:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.453 09:16:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.453 09:16:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:18.453 09:16:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.453 09:16:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.453 [2024-07-15 09:16:05.621961] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
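The entries that follow show why locking_app_on_locked_coremask is an expected-failure case: with core locks left enabled on both sides, the second instance (pid 483305) is rejected with 'Cannot create lock on core 0, probably process 483287 has claimed it', and the NOT helper turns that non-zero exit into a pass. A rough standalone equivalent of the pattern (NOT and waitforlisten are autotest_common.sh helpers; the backgrounding and kill -0 polling below are simplifications):

    tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    "$tgt" -m 0x1 -r /var/tmp/spdk2.sock &    # second instance on a core that is already claimed
    pid2=$!
    sleep 2                                    # crude stand-in for waitforlisten's retry loop
    if kill -0 "$pid2" 2>/dev/null; then
        echo 'unexpected: second instance is still running' >&2; exit 1
    fi
    echo 'second instance exited as expected'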
00:07:18.453 [2024-07-15 09:16:05.622013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483305 ] 00:07:18.453 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.712 [2024-07-15 09:16:05.722595] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 483287 has claimed it. 00:07:18.712 [2024-07-15 09:16:05.722641] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:19.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (483305) - No such process 00:07:19.283 ERROR: process (pid: 483305) is no longer running 00:07:19.283 09:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.283 09:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:19.283 09:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:19.283 09:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.283 09:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:19.283 09:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.283 09:16:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 483287 00:07:19.283 09:16:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 483287 00:07:19.283 09:16:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:19.543 lslocks: write error 00:07:19.543 09:16:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 483287 00:07:19.543 09:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 483287 ']' 00:07:19.543 09:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 483287 00:07:19.543 09:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:19.543 09:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:19.543 09:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 483287 00:07:19.543 09:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:19.543 09:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:19.543 09:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 483287' 00:07:19.543 killing process with pid 483287 00:07:19.543 09:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 483287 00:07:19.543 09:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 483287 00:07:19.804 00:07:19.804 real 0m2.166s 00:07:19.804 user 0m2.399s 00:07:19.804 sys 0m0.608s 00:07:19.804 09:16:06 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.804 09:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.804 ************************************ 00:07:19.804 END TEST locking_app_on_locked_coremask 00:07:19.804 ************************************ 00:07:19.804 09:16:06 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:19.804 09:16:06 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:19.804 09:16:06 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.804 09:16:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.804 09:16:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.804 ************************************ 00:07:19.804 START TEST locking_overlapped_coremask 00:07:19.804 ************************************ 00:07:19.804 09:16:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:19.804 09:16:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=483670 00:07:19.804 09:16:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 483670 /var/tmp/spdk.sock 00:07:19.804 09:16:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:19.804 09:16:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 483670 ']' 00:07:19.804 09:16:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.804 09:16:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.804 09:16:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.804 09:16:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.804 09:16:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.065 [2024-07-15 09:16:07.044578] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:07:20.065 [2024-07-15 09:16:07.044627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483670 ] 00:07:20.065 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.065 [2024-07-15 09:16:07.111272] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.065 [2024-07-15 09:16:07.177870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.065 [2024-07-15 09:16:07.178043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.065 [2024-07-15 09:16:07.178046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.635 09:16:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.635 09:16:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:20.635 09:16:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=483911 00:07:20.636 09:16:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 483911 /var/tmp/spdk2.sock 00:07:20.636 09:16:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:20.636 09:16:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:20.636 09:16:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 483911 /var/tmp/spdk2.sock 00:07:20.636 09:16:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:20.636 09:16:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.636 09:16:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:20.636 09:16:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.636 09:16:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 483911 /var/tmp/spdk2.sock 00:07:20.636 09:16:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 483911 ']' 00:07:20.636 09:16:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.636 09:16:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.636 09:16:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:20.636 09:16:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.636 09:16:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.896 [2024-07-15 09:16:07.868428] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
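locking_overlapped_coremask uses overlapping masks rather than identical ones: the first target runs with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so the next entries show the second instance failing on core 2, the one core the two masks share. The overlap is easy to confirm:

    # Masks from the trace; the bitwise AND names the contested core.
    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2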
00:07:20.896 [2024-07-15 09:16:07.868485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483911 ] 00:07:20.896 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.896 [2024-07-15 09:16:07.950440] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 483670 has claimed it. 00:07:20.896 [2024-07-15 09:16:07.950474] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:21.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (483911) - No such process 00:07:21.467 ERROR: process (pid: 483911) is no longer running 00:07:21.467 09:16:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.467 09:16:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:21.467 09:16:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:21.467 09:16:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.467 09:16:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:21.467 09:16:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.467 09:16:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:21.467 09:16:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:21.467 09:16:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:21.467 09:16:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:21.467 09:16:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 483670 00:07:21.467 09:16:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 483670 ']' 00:07:21.467 09:16:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 483670 00:07:21.467 09:16:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:21.467 09:16:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:21.467 09:16:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 483670 00:07:21.467 09:16:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:21.467 09:16:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:21.467 09:16:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 483670' 00:07:21.467 killing process with pid 483670 00:07:21.467 09:16:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 483670 00:07:21.467 09:16:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 483670 00:07:21.756 00:07:21.756 real 0m1.749s 00:07:21.756 user 0m4.961s 00:07:21.756 sys 0m0.364s 00:07:21.756 09:16:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.756 09:16:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.756 ************************************ 00:07:21.756 END TEST locking_overlapped_coremask 00:07:21.756 ************************************ 00:07:21.756 09:16:08 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:21.756 09:16:08 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:21.756 09:16:08 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.756 09:16:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.756 09:16:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.756 ************************************ 00:07:21.756 START TEST locking_overlapped_coremask_via_rpc 00:07:21.756 ************************************ 00:07:21.757 09:16:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:21.757 09:16:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=484040 00:07:21.757 09:16:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 484040 /var/tmp/spdk.sock 00:07:21.757 09:16:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:21.757 09:16:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 484040 ']' 00:07:21.757 09:16:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.757 09:16:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.757 09:16:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.757 09:16:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.757 09:16:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.757 [2024-07-15 09:16:08.872927] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:21.757 [2024-07-15 09:16:08.872973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484040 ] 00:07:21.757 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.048 [2024-07-15 09:16:08.938522] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
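A note on the check_remaining_locks step in the overlapped test just above: before killing the surviving -m 0x7 target it globs the per-core lock files and requires that exactly cores 000-002 are represented. A standalone version of that comparison, lifted almost directly from the trace:

    locks=(/var/tmp/spdk_cpu_lock_*)                       # lock files currently present
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})     # one file per core in mask 0x7
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'only the expected core locks remain'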
00:07:22.048 [2024-07-15 09:16:08.938548] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.048 [2024-07-15 09:16:09.004885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.048 [2024-07-15 09:16:09.005000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.048 [2024-07-15 09:16:09.005003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.621 09:16:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.621 09:16:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:22.621 09:16:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=484371 00:07:22.621 09:16:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 484371 /var/tmp/spdk2.sock 00:07:22.621 09:16:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 484371 ']' 00:07:22.621 09:16:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:22.621 09:16:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:22.621 09:16:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.621 09:16:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:22.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:22.621 09:16:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.621 09:16:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.621 [2024-07-15 09:16:09.693894] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:22.621 [2024-07-15 09:16:09.693948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484371 ] 00:07:22.621 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.621 [2024-07-15 09:16:09.775769] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
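Note on the core masks used above: the first target was started with -m 0x7 (cores 0, 1, 2) and the second with -m 0x1c (cores 2, 3, 4), so the two masks deliberately share core 2. That overlap is what the RPC call further down is expected to trip over. A minimal bash sketch of the overlap (a hypothetical helper, not part of the test scripts):

  # Show which cores two SPDK core masks have in common.
  mask_a=0x7    # first spdk_tgt: cores 0,1,2
  mask_b=0x1c   # second spdk_tgt: cores 2,3,4
  overlap=$((mask_a & mask_b))                 # 0x4, only bit 2 set
  printf 'overlapping mask: 0x%x\n' "$overlap"
  for core in $(seq 0 7); do
    (( (overlap >> core) & 1 )) && echo "core $core is claimed by both masks"
  done

Running this prints "core 2 is claimed by both masks", which matches the claim_cpu_cores error that follows.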
00:07:22.621 [2024-07-15 09:16:09.775793] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.882 [2024-07-15 09:16:09.881807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.882 [2024-07-15 09:16:09.884873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.882 [2024-07-15 09:16:09.884874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.455 [2024-07-15 09:16:10.471812] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 484040 has claimed it. 
00:07:23.455 request: 00:07:23.455 { 00:07:23.455 "method": "framework_enable_cpumask_locks", 00:07:23.455 "req_id": 1 00:07:23.455 } 00:07:23.455 Got JSON-RPC error response 00:07:23.455 response: 00:07:23.455 { 00:07:23.455 "code": -32603, 00:07:23.455 "message": "Failed to claim CPU core: 2" 00:07:23.455 } 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 484040 /var/tmp/spdk.sock 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 484040 ']' 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 484371 /var/tmp/spdk2.sock 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 484371 ']' 00:07:23.455 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:23.716 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.716 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:23.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
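The request/response pair above is the heart of the test: framework_enable_cpumask_locks only tries to take the per-core lock files when the RPC is issued, and on the second target it fails with JSON-RPC error -32603 because core 2 is already locked by pid 484040. Outside the harness the same call could be driven with SPDK's rpc.py against the secondary socket (the script path below is an assumption about this checkout's layout; the socket matches the -r argument used at startup):

  # Sketch: issue the lock-enable RPC to the second target by hand.
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # With an overlapping core mask this returns the error shown above:
  #   {"code": -32603, "message": "Failed to claim CPU core: 2"}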
00:07:23.716 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.716 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.716 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.716 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:23.716 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:23.716 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:23.716 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:23.716 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:23.716 00:07:23.716 real 0m1.997s 00:07:23.716 user 0m0.775s 00:07:23.716 sys 0m0.149s 00:07:23.716 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.716 09:16:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.716 ************************************ 00:07:23.716 END TEST locking_overlapped_coremask_via_rpc 00:07:23.716 ************************************ 00:07:23.716 09:16:10 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:23.716 09:16:10 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:23.716 09:16:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 484040 ]] 00:07:23.716 09:16:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 484040 00:07:23.716 09:16:10 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 484040 ']' 00:07:23.716 09:16:10 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 484040 00:07:23.716 09:16:10 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:23.716 09:16:10 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:23.716 09:16:10 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 484040 00:07:23.716 09:16:10 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:23.716 09:16:10 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:23.716 09:16:10 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 484040' 00:07:23.716 killing process with pid 484040 00:07:23.716 09:16:10 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 484040 00:07:23.716 09:16:10 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 484040 00:07:23.977 09:16:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 484371 ]] 00:07:23.977 09:16:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 484371 00:07:23.977 09:16:11 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 484371 ']' 00:07:23.977 09:16:11 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 484371 00:07:23.977 09:16:11 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
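check_remaining_locks is hard to read in the xtrace above because bash escapes every character of the expected string. Restated as a standalone sketch (the echo/exit is a simplification of the helper's assertion, not the exact code):

  # After locks are enabled on the -m 0x7 target, exactly the lock files for
  # cores 0, 1 and 2 should exist under /var/tmp.
  locks=(/var/tmp/spdk_cpu_lock_*)                    # what actually exists
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # what -m 0x7 implies
  if [[ "${locks[*]}" != "${locks_expected[*]}" ]]; then
    echo "unexpected CPU lock files: ${locks[*]}" >&2
    exit 1
  fi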
00:07:23.977 09:16:11 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:23.977 09:16:11 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 484371 00:07:24.238 09:16:11 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:24.238 09:16:11 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:24.238 09:16:11 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 484371' 00:07:24.238 killing process with pid 484371 00:07:24.238 09:16:11 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 484371 00:07:24.238 09:16:11 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 484371 00:07:24.238 09:16:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:24.238 09:16:11 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:24.238 09:16:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 484040 ]] 00:07:24.238 09:16:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 484040 00:07:24.238 09:16:11 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 484040 ']' 00:07:24.238 09:16:11 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 484040 00:07:24.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (484040) - No such process 00:07:24.238 09:16:11 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 484040 is not found' 00:07:24.238 Process with pid 484040 is not found 00:07:24.238 09:16:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 484371 ]] 00:07:24.238 09:16:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 484371 00:07:24.238 09:16:11 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 484371 ']' 00:07:24.238 09:16:11 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 484371 00:07:24.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (484371) - No such process 00:07:24.238 09:16:11 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 484371 is not found' 00:07:24.238 Process with pid 484371 is not found 00:07:24.238 09:16:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:24.238 00:07:24.238 real 0m15.643s 00:07:24.238 user 0m26.885s 00:07:24.238 sys 0m4.722s 00:07:24.238 09:16:11 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.238 09:16:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.238 ************************************ 00:07:24.238 END TEST cpu_locks 00:07:24.238 ************************************ 00:07:24.238 09:16:11 event -- common/autotest_common.sh@1142 -- # return 0 00:07:24.238 00:07:24.238 real 0m41.121s 00:07:24.238 user 1m19.773s 00:07:24.238 sys 0m7.842s 00:07:24.238 09:16:11 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.238 09:16:11 event -- common/autotest_common.sh@10 -- # set +x 00:07:24.238 ************************************ 00:07:24.238 END TEST event 00:07:24.238 ************************************ 00:07:24.500 09:16:11 -- common/autotest_common.sh@1142 -- # return 0 00:07:24.500 09:16:11 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:24.500 09:16:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:24.500 09:16:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.500 09:16:11 -- 
common/autotest_common.sh@10 -- # set +x 00:07:24.500 ************************************ 00:07:24.500 START TEST thread 00:07:24.500 ************************************ 00:07:24.500 09:16:11 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:24.500 * Looking for test storage... 00:07:24.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:24.500 09:16:11 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:24.500 09:16:11 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:24.500 09:16:11 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.500 09:16:11 thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.500 ************************************ 00:07:24.500 START TEST thread_poller_perf 00:07:24.500 ************************************ 00:07:24.500 09:16:11 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:24.500 [2024-07-15 09:16:11.647848] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:24.500 [2024-07-15 09:16:11.647945] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484815 ] 00:07:24.500 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.761 [2024-07-15 09:16:11.719869] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.761 [2024-07-15 09:16:11.789439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.761 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:25.704 ====================================== 00:07:25.704 busy:2413894572 (cyc) 00:07:25.704 total_run_count: 288000 00:07:25.704 tsc_hz: 2400000000 (cyc) 00:07:25.704 ====================================== 00:07:25.704 poller_cost: 8381 (cyc), 3492 (nsec) 00:07:25.704 00:07:25.704 real 0m1.226s 00:07:25.704 user 0m1.143s 00:07:25.704 sys 0m0.079s 00:07:25.704 09:16:12 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.704 09:16:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:25.704 ************************************ 00:07:25.704 END TEST thread_poller_perf 00:07:25.704 ************************************ 00:07:25.704 09:16:12 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:25.704 09:16:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:25.704 09:16:12 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:25.704 09:16:12 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.704 09:16:12 thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.965 ************************************ 00:07:25.965 START TEST thread_poller_perf 00:07:25.965 ************************************ 00:07:25.965 09:16:12 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:25.965 [2024-07-15 09:16:12.951544] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:25.965 [2024-07-15 09:16:12.951640] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485119 ] 00:07:25.965 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.965 [2024-07-15 09:16:13.023598] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.965 [2024-07-15 09:16:13.092065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.965 Running 1000 pollers for 1 seconds with 0 microseconds period. 
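The ====== summary lines are straightforward to interpret: busy is the TSC cycle count spent in the pollers, total_run_count is how many poller executions fit into the 1 second run, and poller_cost is the former divided by the latter, converted to nanoseconds via tsc_hz. Re-deriving the figures of the 1 microsecond-period run above (the 0 microsecond run that follows works out the same way: 2401840800 / 3810000 = 630 cycles, about 262 ns):

  # Reproduce the poller_cost values from the summary above.
  busy_cyc=2413894572
  run_count=288000
  tsc_hz=2400000000
  cost_cyc=$((busy_cyc / run_count))              # 8381 cycles
  cost_nsec=$((cost_cyc * 1000000000 / tsc_hz))   # 3492 ns at 2.4 GHz
  echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"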
00:07:27.356 ====================================== 00:07:27.356 busy:2401840800 (cyc) 00:07:27.356 total_run_count: 3810000 00:07:27.356 tsc_hz: 2400000000 (cyc) 00:07:27.356 ====================================== 00:07:27.356 poller_cost: 630 (cyc), 262 (nsec) 00:07:27.356 00:07:27.356 real 0m1.216s 00:07:27.356 user 0m1.137s 00:07:27.356 sys 0m0.075s 00:07:27.356 09:16:14 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.356 09:16:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:27.356 ************************************ 00:07:27.356 END TEST thread_poller_perf 00:07:27.356 ************************************ 00:07:27.356 09:16:14 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:27.356 09:16:14 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:27.356 00:07:27.356 real 0m2.688s 00:07:27.356 user 0m2.372s 00:07:27.356 sys 0m0.321s 00:07:27.356 09:16:14 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.356 09:16:14 thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.356 ************************************ 00:07:27.356 END TEST thread 00:07:27.356 ************************************ 00:07:27.356 09:16:14 -- common/autotest_common.sh@1142 -- # return 0 00:07:27.356 09:16:14 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:27.356 09:16:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:27.356 09:16:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.356 09:16:14 -- common/autotest_common.sh@10 -- # set +x 00:07:27.356 ************************************ 00:07:27.356 START TEST accel 00:07:27.356 ************************************ 00:07:27.356 09:16:14 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:27.356 * Looking for test storage... 00:07:27.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:27.356 09:16:14 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:27.356 09:16:14 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:27.356 09:16:14 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:27.356 09:16:14 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=485381 00:07:27.356 09:16:14 accel -- accel/accel.sh@63 -- # waitforlisten 485381 00:07:27.356 09:16:14 accel -- common/autotest_common.sh@829 -- # '[' -z 485381 ']' 00:07:27.356 09:16:14 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.356 09:16:14 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:27.356 09:16:14 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:27.356 09:16:14 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:27.356 09:16:14 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:27.356 09:16:14 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:27.356 09:16:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.356 09:16:14 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.356 09:16:14 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.356 09:16:14 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.356 09:16:14 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.356 09:16:14 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.356 09:16:14 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:27.356 09:16:14 accel -- accel/accel.sh@41 -- # jq -r . 00:07:27.356 [2024-07-15 09:16:14.421658] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:27.356 [2024-07-15 09:16:14.421723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485381 ] 00:07:27.356 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.356 [2024-07-15 09:16:14.494855] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.617 [2024-07-15 09:16:14.570501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.189 09:16:15 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.189 09:16:15 accel -- common/autotest_common.sh@862 -- # return 0 00:07:28.189 09:16:15 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:28.189 09:16:15 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:28.189 09:16:15 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:28.189 09:16:15 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:28.189 09:16:15 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:28.189 09:16:15 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:28.189 09:16:15 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.189 09:16:15 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:28.189 09:16:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.189 09:16:15 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.189 09:16:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:28.189 09:16:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.189 09:16:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:28.189 09:16:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.189 09:16:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:28.189 09:16:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.189 09:16:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:28.189 09:16:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.189 09:16:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:28.189 09:16:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.189 09:16:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:28.189 09:16:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.189 09:16:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:28.189 09:16:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.189 09:16:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:28.189 09:16:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.189 09:16:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:28.189 09:16:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.189 09:16:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:28.189 09:16:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.189 09:16:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:28.189 09:16:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.189 
09:16:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:28.189 09:16:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.189 09:16:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:28.189 09:16:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.189 09:16:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:28.189 09:16:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.189 09:16:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.189 09:16:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:28.189 09:16:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.189 09:16:15 accel -- accel/accel.sh@75 -- # killprocess 485381 00:07:28.189 09:16:15 accel -- common/autotest_common.sh@948 -- # '[' -z 485381 ']' 00:07:28.189 09:16:15 accel -- common/autotest_common.sh@952 -- # kill -0 485381 00:07:28.189 09:16:15 accel -- common/autotest_common.sh@953 -- # uname 00:07:28.189 09:16:15 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:28.189 09:16:15 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 485381 00:07:28.189 09:16:15 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:28.189 09:16:15 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:28.189 09:16:15 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 485381' 00:07:28.189 killing process with pid 485381 00:07:28.189 09:16:15 accel -- common/autotest_common.sh@967 -- # kill 485381 00:07:28.189 09:16:15 accel -- common/autotest_common.sh@972 -- # wait 485381 00:07:28.451 09:16:15 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:28.451 09:16:15 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:28.451 09:16:15 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:28.451 09:16:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.451 09:16:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.451 09:16:15 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:28.451 09:16:15 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:28.451 09:16:15 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:28.451 09:16:15 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.451 09:16:15 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.451 09:16:15 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.451 09:16:15 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.451 09:16:15 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.451 09:16:15 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:28.451 09:16:15 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
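The long expected_opcs loop above is the harness recording the output of the accel_get_opc_assignments RPC: with no accel hardware configured, every opcode (copy, fill, crc32c, compress, decompress, ...) is assigned to the software module, which is what the later [[ software == software ]] checks rely on. The same query can be run by hand with the jq filter taken verbatim from the xtrace (the rpc.py path is an assumption about this checkout's layout):

  # Sketch: list opcode -> module assignments the way accel.sh parses them.
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./scripts/rpc.py accel_get_opc_assignments \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # Expected output in this run: one "opcode=software" line per operation.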
00:07:28.451 09:16:15 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.451 09:16:15 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:28.451 09:16:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:28.451 09:16:15 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:28.451 09:16:15 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:28.451 09:16:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.451 09:16:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.451 ************************************ 00:07:28.451 START TEST accel_missing_filename 00:07:28.451 ************************************ 00:07:28.451 09:16:15 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:28.451 09:16:15 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:28.451 09:16:15 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:28.451 09:16:15 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:28.451 09:16:15 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.451 09:16:15 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:28.451 09:16:15 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.451 09:16:15 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:28.451 09:16:15 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:28.451 09:16:15 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:28.451 09:16:15 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.451 09:16:15 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.451 09:16:15 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.451 09:16:15 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.451 09:16:15 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.451 09:16:15 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:28.712 09:16:15 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:28.712 [2024-07-15 09:16:15.674125] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:28.712 [2024-07-15 09:16:15.674217] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485611 ] 00:07:28.712 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.712 [2024-07-15 09:16:15.742313] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.712 [2024-07-15 09:16:15.806867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.713 [2024-07-15 09:16:15.838462] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:28.713 [2024-07-15 09:16:15.875387] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:28.974 A filename is required. 
00:07:28.974 09:16:15 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:28.974 09:16:15 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.974 09:16:15 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:28.974 09:16:15 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:28.974 09:16:15 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:28.974 09:16:15 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.974 00:07:28.974 real 0m0.286s 00:07:28.974 user 0m0.218s 00:07:28.974 sys 0m0.111s 00:07:28.974 09:16:15 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.974 09:16:15 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:28.974 ************************************ 00:07:28.974 END TEST accel_missing_filename 00:07:28.974 ************************************ 00:07:28.974 09:16:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:28.974 09:16:15 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:28.974 09:16:15 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:28.974 09:16:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.974 09:16:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.974 ************************************ 00:07:28.974 START TEST accel_compress_verify 00:07:28.974 ************************************ 00:07:28.974 09:16:16 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:28.974 09:16:16 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:28.974 09:16:16 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:28.974 09:16:16 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:28.974 09:16:16 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.974 09:16:16 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:28.974 09:16:16 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.974 09:16:16 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:28.974 09:16:16 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:28.974 09:16:16 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:28.974 09:16:16 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.974 09:16:16 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.974 09:16:16 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.974 09:16:16 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.974 09:16:16 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.974 09:16:16 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:28.974 09:16:16 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:28.974 [2024-07-15 09:16:16.034210] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:28.974 [2024-07-15 09:16:16.034280] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485748 ] 00:07:28.974 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.974 [2024-07-15 09:16:16.100472] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.974 [2024-07-15 09:16:16.163881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.236 [2024-07-15 09:16:16.195543] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.236 [2024-07-15 09:16:16.232488] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:29.236 00:07:29.236 Compression does not support the verify option, aborting. 00:07:29.236 09:16:16 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:29.236 09:16:16 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:29.236 09:16:16 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:29.236 09:16:16 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:29.236 09:16:16 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:29.236 09:16:16 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:29.236 00:07:29.236 real 0m0.284s 00:07:29.236 user 0m0.214s 00:07:29.236 sys 0m0.112s 00:07:29.236 09:16:16 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.236 09:16:16 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:29.236 ************************************ 00:07:29.236 END TEST accel_compress_verify 00:07:29.236 ************************************ 00:07:29.236 09:16:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:29.236 09:16:16 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:29.236 09:16:16 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:29.236 09:16:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.236 09:16:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.236 ************************************ 00:07:29.236 START TEST accel_wrong_workload 00:07:29.236 ************************************ 00:07:29.236 09:16:16 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:29.236 09:16:16 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:29.236 09:16:16 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:29.236 09:16:16 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:29.236 09:16:16 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.236 09:16:16 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:29.236 09:16:16 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.236 09:16:16 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:29.236 09:16:16 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:29.236 09:16:16 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:29.236 09:16:16 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.237 09:16:16 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.237 09:16:16 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.237 09:16:16 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.237 09:16:16 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.237 09:16:16 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:29.237 09:16:16 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:29.237 Unsupported workload type: foobar 00:07:29.237 [2024-07-15 09:16:16.390744] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:29.237 accel_perf options: 00:07:29.237 [-h help message] 00:07:29.237 [-q queue depth per core] 00:07:29.237 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:29.237 [-T number of threads per core 00:07:29.237 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:29.237 [-t time in seconds] 00:07:29.237 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:29.237 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:29.237 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:29.237 [-l for compress/decompress workloads, name of uncompressed input file 00:07:29.237 [-S for crc32c workload, use this seed value (default 0) 00:07:29.237 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:29.237 [-f for fill workload, use this BYTE value (default 255) 00:07:29.237 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:29.237 [-y verify result if this switch is on] 00:07:29.237 [-a tasks to allocate per core (default: same value as -q)] 00:07:29.237 Can be used to spread operations across a wider range of memory. 
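The usage text above is printed because the accel_wrong_workload test deliberately passes -w foobar. The flags it documents line up with the invocations used elsewhere in this suite; a few illustrative stand-alone runs follow (paths as in this workspace and flags taken from commands visible in the log; these are sketches, not part of the captured job):

  perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  # crc32c for 1 second with seed 32, verifying results (as in the accel_crc32c test):
  $perf -t 1 -w crc32c -S 32 -y
  # compress takes an uncompressed input file via -l; adding -y here is exactly
  # what the accel_compress_verify test expects to abort ("Compression does not
  # support the verify option"):
  $perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
  # xor needs at least two source buffers, which is why the -x -1 case below fails:
  $perf -t 1 -w xor -y -x 2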
00:07:29.237 09:16:16 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:29.237 09:16:16 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:29.237 09:16:16 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:29.237 09:16:16 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:29.237 00:07:29.237 real 0m0.037s 00:07:29.237 user 0m0.024s 00:07:29.237 sys 0m0.013s 00:07:29.237 09:16:16 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.237 09:16:16 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:29.237 ************************************ 00:07:29.237 END TEST accel_wrong_workload 00:07:29.237 ************************************ 00:07:29.237 Error: writing output failed: Broken pipe 00:07:29.237 09:16:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:29.237 09:16:16 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:29.237 09:16:16 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:29.237 09:16:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.237 09:16:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.498 ************************************ 00:07:29.498 START TEST accel_negative_buffers 00:07:29.498 ************************************ 00:07:29.498 09:16:16 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:29.498 09:16:16 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:29.498 09:16:16 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:29.498 09:16:16 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:29.498 09:16:16 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.498 09:16:16 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:29.498 09:16:16 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.498 09:16:16 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:29.498 09:16:16 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:29.498 09:16:16 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:29.498 09:16:16 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.498 09:16:16 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.498 09:16:16 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.498 09:16:16 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.498 09:16:16 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.498 09:16:16 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:29.498 09:16:16 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:29.498 -x option must be non-negative. 
00:07:29.498 [2024-07-15 09:16:16.499960] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:29.498 accel_perf options: 00:07:29.498 [-h help message] 00:07:29.499 [-q queue depth per core] 00:07:29.499 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:29.499 [-T number of threads per core 00:07:29.499 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:29.499 [-t time in seconds] 00:07:29.499 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:29.499 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:29.499 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:29.499 [-l for compress/decompress workloads, name of uncompressed input file 00:07:29.499 [-S for crc32c workload, use this seed value (default 0) 00:07:29.499 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:29.499 [-f for fill workload, use this BYTE value (default 255) 00:07:29.499 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:29.499 [-y verify result if this switch is on] 00:07:29.499 [-a tasks to allocate per core (default: same value as -q)] 00:07:29.499 Can be used to spread operations across a wider range of memory. 00:07:29.499 09:16:16 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:29.499 09:16:16 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:29.499 09:16:16 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:29.499 09:16:16 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:29.499 00:07:29.499 real 0m0.035s 00:07:29.499 user 0m0.019s 00:07:29.499 sys 0m0.016s 00:07:29.499 09:16:16 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.499 09:16:16 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:29.499 ************************************ 00:07:29.499 END TEST accel_negative_buffers 00:07:29.499 ************************************ 00:07:29.499 Error: writing output failed: Broken pipe 00:07:29.499 09:16:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:29.499 09:16:16 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:29.499 09:16:16 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:29.499 09:16:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.499 09:16:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.499 ************************************ 00:07:29.499 START TEST accel_crc32c 00:07:29.499 ************************************ 00:07:29.499 09:16:16 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:29.499 09:16:16 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:29.499 09:16:16 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:29.499 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.499 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.499 09:16:16 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:29.499 09:16:16 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:29.499 09:16:16 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:29.499 09:16:16 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.499 09:16:16 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.499 09:16:16 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.499 09:16:16 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.499 09:16:16 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.499 09:16:16 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:29.499 09:16:16 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:29.499 [2024-07-15 09:16:16.609876] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:29.499 [2024-07-15 09:16:16.609942] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid486020 ] 00:07:29.499 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.499 [2024-07-15 09:16:16.677325] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.760 [2024-07-15 09:16:16.741062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.760 09:16:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.761 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.761 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.761 09:16:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:29.761 09:16:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.761 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.761 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.761 09:16:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.761 09:16:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.761 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.761 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.761 09:16:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.761 09:16:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.761 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.761 09:16:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:30.704 09:16:17 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.704 00:07:30.704 real 0m1.288s 00:07:30.704 user 0m1.192s 00:07:30.704 sys 0m0.107s 00:07:30.704 09:16:17 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.704 09:16:17 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:30.704 ************************************ 00:07:30.704 END TEST accel_crc32c 00:07:30.704 ************************************ 00:07:30.965 09:16:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:30.965 09:16:17 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:30.965 09:16:17 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:30.965 09:16:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.965 09:16:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.965 ************************************ 00:07:30.965 START TEST accel_crc32c_C2 00:07:30.965 ************************************ 00:07:30.965 09:16:17 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:30.965 09:16:17 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:30.965 09:16:17 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:30.965 09:16:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.965 09:16:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.965 09:16:17 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:30.965 09:16:17 accel.accel_crc32c_C2 
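The accel_crc32c pass above ran the crc32c workload for one second on a single core, landed on the software module, and finished in roughly 1.29 s of wall time. As a rough, hand-runnable sketch of the same invocation against this job's workspace (the -c JSON config the harness streams on /dev/fd/62 is dropped here, on the assumption that with no accel modules configured the software engine is selected either way):

  # illustrative sketch, not harness code
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK"/build/examples/accel_perf -t 1 -w crc32c -S 32 -y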
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:30.965 09:16:17 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.965 09:16:17 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.965 09:16:17 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.965 09:16:17 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.965 09:16:17 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.965 09:16:17 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.965 09:16:17 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:30.965 09:16:17 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:30.965 [2024-07-15 09:16:17.972394] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:30.965 [2024-07-15 09:16:17.972459] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid486277 ] 00:07:30.965 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.965 [2024-07-15 09:16:18.043027] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.965 [2024-07-15 09:16:18.113675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.965 09:16:18 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.965 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:07:30.966 09:16:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.354 00:07:32.354 real 0m1.299s 00:07:32.354 user 0m1.201s 00:07:32.354 sys 0m0.110s 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.354 09:16:19 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:32.354 ************************************ 00:07:32.354 END TEST accel_crc32c_C2 00:07:32.354 ************************************ 00:07:32.354 09:16:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:32.354 09:16:19 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:32.354 09:16:19 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:32.354 09:16:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.354 09:16:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.354 ************************************ 00:07:32.354 START TEST accel_copy 00:07:32.354 ************************************ 00:07:32.354 09:16:19 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:32.354 09:16:19 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:32.354 09:16:19 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
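accel_crc32c_C2 repeats the crc32c workload with -C 2 and again completes on the software path in about 1.30 s. To mimic the way the harness hands its JSON config to accel_perf on /dev/fd/62, process substitution does the same job; the empty config body below is a stand-in assumption, not the harness's actual output:

  # illustrative sketch; '{}' is a hypothetical placeholder config
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK"/build/examples/accel_perf -c <(echo '{}') -t 1 -w crc32c -y -C 2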
00:07:32.354 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.354 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.354 09:16:19 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:32.354 09:16:19 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:32.354 09:16:19 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:32.354 09:16:19 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.354 09:16:19 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:32.355 [2024-07-15 09:16:19.345244] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:32.355 [2024-07-15 09:16:19.345309] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid486476 ] 00:07:32.355 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.355 [2024-07-15 09:16:19.415191] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.355 [2024-07-15 09:16:19.486456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.355 09:16:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.742 09:16:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:33.742 09:16:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.742 09:16:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.742 09:16:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.742 
09:16:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:33.742 09:16:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.742 09:16:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.742 09:16:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.742 09:16:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:33.742 09:16:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.742 09:16:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.742 09:16:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.742 09:16:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:33.742 09:16:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.742 09:16:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.742 09:16:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.742 09:16:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:33.742 09:16:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.742 09:16:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.742 09:16:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.742 09:16:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:33.742 09:16:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.743 09:16:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.743 09:16:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.743 09:16:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.743 09:16:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:33.743 09:16:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.743 00:07:33.743 real 0m1.299s 00:07:33.743 user 0m1.194s 00:07:33.743 sys 0m0.116s 00:07:33.743 09:16:20 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.743 09:16:20 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:33.743 ************************************ 00:07:33.743 END TEST accel_copy 00:07:33.743 ************************************ 00:07:33.743 09:16:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.743 09:16:20 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:33.743 09:16:20 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:33.743 09:16:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.743 09:16:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.743 ************************************ 00:07:33.743 START TEST accel_fill 00:07:33.743 ************************************ 00:07:33.743 09:16:20 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@12 -- # 
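The copy pass is the plainest of the group: the same one-second, single-core run on the software module, about 1.30 s of wall time. A hand-run equivalent, with the same caveat as above about omitting the harness-supplied config:

  # illustrative sketch
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy -y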
build_accel_config 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:33.743 [2024-07-15 09:16:20.717740] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:33.743 [2024-07-15 09:16:20.717813] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid486759 ] 00:07:33.743 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.743 [2024-07-15 09:16:20.785695] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.743 [2024-07-15 09:16:20.852524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.743 09:16:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:35.130 09:16:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:35.130 09:16:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:35.130 09:16:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:35.130 09:16:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:35.130 09:16:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:35.130 09:16:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:35.130 09:16:21 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:07:35.130 09:16:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:35.130 09:16:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:35.130 09:16:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:35.130 09:16:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:35.130 09:16:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:35.130 09:16:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:35.130 09:16:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:35.130 09:16:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:35.130 09:16:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:35.130 09:16:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:35.130 09:16:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:35.130 09:16:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:35.130 09:16:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:35.130 09:16:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:35.130 09:16:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:35.130 09:16:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:35.130 09:16:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:35.130 09:16:21 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.131 09:16:21 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:35.131 09:16:21 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.131 00:07:35.131 real 0m1.291s 00:07:35.131 user 0m1.197s 00:07:35.131 sys 0m0.106s 00:07:35.131 09:16:21 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.131 09:16:21 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:35.131 ************************************ 00:07:35.131 END TEST accel_fill 00:07:35.131 ************************************ 00:07:35.131 09:16:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:35.131 09:16:22 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:35.131 09:16:22 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:35.131 09:16:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.131 09:16:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.131 ************************************ 00:07:35.131 START TEST accel_copy_crc32c 00:07:35.131 ************************************ 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- 
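The fill pass carries the extra -f 128 -q 64 -a 64 parameters seen in its captured invocation, reproduced verbatim below rather than reinterpreted. Hand-run sketch under the same config caveat:

  # illustrative sketch
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y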
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:35.131 [2024-07-15 09:16:22.082692] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:35.131 [2024-07-15 09:16:22.082785] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487106 ] 00:07:35.131 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.131 [2024-07-15 09:16:22.151393] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.131 [2024-07-15 09:16:22.216603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.131 
09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.131 09:16:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.517 00:07:36.517 real 0m1.291s 00:07:36.517 user 0m1.191s 00:07:36.517 sys 0m0.112s 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.517 09:16:23 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:36.517 ************************************ 00:07:36.517 END TEST accel_copy_crc32c 00:07:36.517 ************************************ 00:07:36.517 09:16:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:36.517 09:16:23 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:36.517 09:16:23 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:36.517 09:16:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.517 09:16:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.517 ************************************ 00:07:36.517 START TEST accel_copy_crc32c_C2 00:07:36.517 ************************************ 00:07:36.517 09:16:23 
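copy_crc32c chains a buffer copy with a CRC-32C computation in a single accel operation; like the earlier passes it stays on the software module and completes in roughly 1.29 s. Hand-run sketch:

  # illustrative sketch
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y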
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:36.517 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:36.517 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:36.517 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.517 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.517 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:36.517 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:36.517 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.517 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.517 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.517 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.517 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.517 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.517 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:36.517 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:36.517 [2024-07-15 09:16:23.448434] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:36.517 [2024-07-15 09:16:23.448528] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487463 ] 00:07:36.517 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.517 [2024-07-15 09:16:23.518820] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.517 [2024-07-15 09:16:23.589966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.517 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.517 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.517 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.517 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.517 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.517 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.517 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.517 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.517 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:36.517 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.517 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.518 09:16:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.903 00:07:37.903 real 0m1.301s 00:07:37.903 user 0m1.206s 00:07:37.903 sys 0m0.108s 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.903 09:16:24 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:37.903 ************************************ 00:07:37.903 END TEST accel_copy_crc32c_C2 00:07:37.903 ************************************ 00:07:37.903 09:16:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.903 09:16:24 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:37.903 09:16:24 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:37.903 09:16:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.903 09:16:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.903 ************************************ 00:07:37.903 START TEST accel_dualcast 00:07:37.903 ************************************ 00:07:37.903 09:16:24 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:37.903 [2024-07-15 09:16:24.823972] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
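With the -C 2 variant of copy_crc32c done, the suite moves on to the dualcast pass below. To rerun just the two chained (-C 2) variants outside the harness, a small loop covers both, again with the harness's /dev/fd/62 config omitted as an assumption:

  # illustrative sketch over the two -C 2 workloads from this log
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  for w in crc32c copy_crc32c; do
    "$SPDK"/build/examples/accel_perf -t 1 -w "$w" -y -C 2
  done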
00:07:37.903 [2024-07-15 09:16:24.824038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487752 ] 00:07:37.903 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.903 [2024-07-15 09:16:24.894185] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.903 [2024-07-15 09:16:24.965672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.903 09:16:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.903 09:16:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:37.903 09:16:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.903 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.903 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.903 09:16:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:37.903 09:16:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.903 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.903 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.903 09:16:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:37.903 09:16:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.903 09:16:25 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:37.903 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.903 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.903 09:16:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.903 09:16:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.903 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.903 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.903 09:16:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.904 09:16:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:39.287 09:16:26 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:39.287 09:16:26 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.287 00:07:39.287 real 0m1.299s 00:07:39.287 user 0m1.198s 00:07:39.287 sys 0m0.113s 00:07:39.287 09:16:26 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.287 09:16:26 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:39.287 ************************************ 00:07:39.287 END TEST accel_dualcast 00:07:39.287 ************************************ 00:07:39.287 09:16:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:39.287 09:16:26 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:39.287 09:16:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:39.287 09:16:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.287 09:16:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.287 ************************************ 00:07:39.287 START TEST accel_compare 00:07:39.287 ************************************ 00:07:39.287 09:16:26 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:39.287 [2024-07-15 09:16:26.199449] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:07:39.287 [2024-07-15 09:16:26.199521] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487946 ] 00:07:39.287 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.287 [2024-07-15 09:16:26.269333] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.287 [2024-07-15 09:16:26.338431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.287 09:16:26 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.287 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.288 09:16:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.288 09:16:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.288 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.288 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.288 09:16:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.288 09:16:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.288 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.288 09:16:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:40.671 
09:16:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:40.671 09:16:27 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.671 00:07:40.671 real 0m1.296s 00:07:40.671 user 0m1.206s 00:07:40.671 sys 0m0.101s 00:07:40.671 09:16:27 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.671 09:16:27 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:40.671 ************************************ 00:07:40.671 END TEST accel_compare 00:07:40.671 ************************************ 00:07:40.671 09:16:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:40.671 09:16:27 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:40.671 09:16:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:40.671 09:16:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.671 09:16:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.671 ************************************ 00:07:40.671 START TEST accel_xor 00:07:40.671 ************************************ 00:07:40.671 09:16:27 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:40.671 [2024-07-15 09:16:27.575234] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:07:40.671 [2024-07-15 09:16:27.575348] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488199 ] 00:07:40.671 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.671 [2024-07-15 09:16:27.654137] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.671 [2024-07-15 09:16:27.728028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.671 09:16:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.672 09:16:27 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.672 09:16:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:42.056 09:16:28 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.056 00:07:42.056 real 0m1.315s 00:07:42.056 user 0m1.202s 00:07:42.056 sys 0m0.124s 00:07:42.056 09:16:28 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.056 09:16:28 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:42.056 ************************************ 00:07:42.056 END TEST accel_xor 00:07:42.056 ************************************ 00:07:42.056 09:16:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:42.056 09:16:28 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:42.057 09:16:28 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:42.057 09:16:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.057 09:16:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.057 ************************************ 00:07:42.057 START TEST accel_xor 00:07:42.057 ************************************ 00:07:42.057 09:16:28 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:42.057 09:16:28 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:42.057 09:16:28 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:42.057 09:16:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.057 09:16:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.057 09:16:28 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:42.057 09:16:28 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:42.057 09:16:28 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:42.057 09:16:28 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.057 09:16:28 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.057 09:16:28 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.057 09:16:28 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.057 09:16:28 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.057 09:16:28 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:42.057 09:16:28 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:42.057 [2024-07-15 09:16:28.960343] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:07:42.057 [2024-07-15 09:16:28.960439] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488551 ] 00:07:42.057 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.057 [2024-07-15 09:16:29.030840] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.057 [2024-07-15 09:16:29.100041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.057 09:16:29 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.057 09:16:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:43.443 09:16:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.443 00:07:43.443 real 0m1.299s 00:07:43.443 user 0m1.205s 00:07:43.443 sys 0m0.106s 00:07:43.443 09:16:30 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.443 09:16:30 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:43.443 ************************************ 00:07:43.443 END TEST accel_xor 00:07:43.443 ************************************ 00:07:43.443 09:16:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:43.443 09:16:30 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:43.443 09:16:30 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:43.443 09:16:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.443 09:16:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.443 ************************************ 00:07:43.443 START TEST accel_dif_verify 00:07:43.443 ************************************ 00:07:43.443 09:16:30 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:43.443 [2024-07-15 09:16:30.337295] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:07:43.443 [2024-07-15 09:16:30.337413] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488905 ] 00:07:43.443 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.443 [2024-07-15 09:16:30.416637] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.443 [2024-07-15 09:16:30.485201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:43.443 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:43.444 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:43.444 09:16:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:43.444 09:16:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:43.444 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:43.444 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:43.444 09:16:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:43.444 09:16:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:43.444 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:43.444 09:16:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:44.893 09:16:31 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.893 00:07:44.893 real 0m1.309s 00:07:44.893 user 0m1.209s 00:07:44.893 sys 0m0.112s 00:07:44.893 09:16:31 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.893 09:16:31 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:44.893 ************************************ 00:07:44.893 END TEST accel_dif_verify 00:07:44.893 ************************************ 00:07:44.893 09:16:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:44.893 09:16:31 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:44.893 09:16:31 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:44.893 09:16:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.893 09:16:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.893 ************************************ 00:07:44.893 START TEST accel_dif_generate 00:07:44.893 ************************************ 00:07:44.893 09:16:31 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.893 
09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:44.893 [2024-07-15 09:16:31.719024] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:44.893 [2024-07-15 09:16:31.719092] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489257 ] 00:07:44.893 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.893 [2024-07-15 09:16:31.788805] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.893 [2024-07-15 09:16:31.863443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:44.893 09:16:31 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.893 09:16:31 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:44.893 09:16:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.894 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.894 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.894 09:16:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:44.894 09:16:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.894 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.894 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.894 09:16:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:44.894 09:16:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.894 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.894 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.894 09:16:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:44.894 09:16:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.894 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.894 09:16:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:45.836 09:16:32 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:45.836 09:16:32 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.836 00:07:45.836 real 0m1.302s 00:07:45.836 user 0m1.200s 00:07:45.836 sys 0m0.116s 00:07:45.836 09:16:32 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.836 09:16:32 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:45.836 ************************************ 00:07:45.836 END TEST accel_dif_generate 00:07:45.836 ************************************ 00:07:45.836 09:16:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:45.836 09:16:33 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:45.836 09:16:33 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:45.836 09:16:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.836 09:16:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:46.097 ************************************ 00:07:46.097 START TEST accel_dif_generate_copy 00:07:46.097 ************************************ 00:07:46.097 09:16:33 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:46.097 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:46.097 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:46.097 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.097 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.097 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:46.097 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:46.097 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:46.097 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.097 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.097 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.097 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.097 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.097 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:46.098 [2024-07-15 09:16:33.096834] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:07:46.098 [2024-07-15 09:16:33.096920] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489457 ] 00:07:46.098 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.098 [2024-07-15 09:16:33.168265] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.098 [2024-07-15 09:16:33.239716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
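For reference, the accel_perf command line that the harness assembles for these dif tests is visible verbatim in the trace above (build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy). A minimal sketch of running the same workload by hand, outside run_test, follows; only -t 1 and -w dif_generate_copy are taken from the log, while -q 32 and -o 4096 are an assumption that the 32/4096 values echoed by build_accel_config map to queue depth and block size, and the generated JSON config piped in on /dev/fd/62 is omitted on the assumption that the default software module (which this test asserts) is used either way.
# hypothetical manual reproduction of the dif_generate_copy run; path as used in this workspace
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w dif_generate_copy -q 32 -o 4096   # -q/-o values assumed, see note above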
00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.098 09:16:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.483 00:07:47.483 real 0m1.301s 00:07:47.483 user 0m1.208s 00:07:47.483 sys 0m0.104s 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.483 09:16:34 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:47.483 ************************************ 00:07:47.483 END TEST accel_dif_generate_copy 00:07:47.483 ************************************ 00:07:47.483 09:16:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:47.483 09:16:34 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:47.483 09:16:34 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:47.483 09:16:34 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:47.483 09:16:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.483 09:16:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.483 ************************************ 00:07:47.483 START TEST accel_comp 00:07:47.483 ************************************ 00:07:47.483 09:16:34 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:47.483 09:16:34 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:47.483 [2024-07-15 09:16:34.473774] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:47.483 [2024-07-15 09:16:34.473871] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489664 ] 00:07:47.483 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.483 [2024-07-15 09:16:34.544198] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.483 [2024-07-15 09:16:34.613349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.483 09:16:34 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.483 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.484 09:16:34 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:47.484 09:16:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.484 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.484 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.484 09:16:34 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.484 09:16:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.484 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.484 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.484 09:16:34 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:47.484 09:16:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.484 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.484 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:07:47.484 09:16:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:47.484 09:16:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.484 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.484 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.484 09:16:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:47.484 09:16:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.484 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.484 09:16:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:48.870 09:16:35 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.870 00:07:48.870 real 0m1.302s 00:07:48.870 user 0m1.204s 00:07:48.870 sys 0m0.110s 00:07:48.870 09:16:35 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.870 09:16:35 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:48.870 ************************************ 00:07:48.870 END TEST accel_comp 00:07:48.870 ************************************ 00:07:48.870 09:16:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:48.870 09:16:35 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:48.870 09:16:35 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:48.870 09:16:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.870 09:16:35 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:48.870 ************************************ 00:07:48.870 START TEST accel_decomp 00:07:48.870 ************************************ 00:07:48.870 09:16:35 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:48.870 09:16:35 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:48.870 09:16:35 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:48.870 09:16:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.870 09:16:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.870 09:16:35 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:48.870 09:16:35 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:48.870 09:16:35 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:48.870 09:16:35 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:48.870 09:16:35 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:48.870 09:16:35 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.870 09:16:35 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.870 09:16:35 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:48.870 09:16:35 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:48.870 09:16:35 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:48.870 [2024-07-15 09:16:35.850783] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
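The decompress tests feed accel_perf a real compressed payload: the command captured above passes the repository file test/accel/bib via -l together with -y, which (as used throughout this accel.sh run) appear to select the input file to decompress and result verification. A hedged sketch of the equivalent standalone invocation, with the /dev/fd/62 config again left out:
# hypothetical standalone run of the decompress workload against the same input file as in the trace above
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y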
00:07:48.870 [2024-07-15 09:16:35.850866] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489997 ] 00:07:48.870 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.870 [2024-07-15 09:16:35.920405] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.870 [2024-07-15 09:16:35.986107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.870 09:16:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.870 09:16:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.870 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.870 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.870 09:16:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.870 09:16:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.870 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.871 09:16:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:50.258 09:16:37 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:50.258 09:16:37 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.258 00:07:50.258 real 0m1.295s 00:07:50.258 user 0m1.201s 00:07:50.258 sys 0m0.106s 00:07:50.258 09:16:37 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.258 09:16:37 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:50.258 ************************************ 00:07:50.258 END TEST accel_decomp 00:07:50.258 ************************************ 00:07:50.258 09:16:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:50.258 09:16:37 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:50.258 09:16:37 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:50.258 09:16:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.258 09:16:37 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.258 ************************************ 00:07:50.258 START TEST accel_decomp_full 00:07:50.258 ************************************ 00:07:50.258 09:16:37 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:50.258 09:16:37 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:50.258 [2024-07-15 09:16:37.220354] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:50.258 [2024-07-15 09:16:37.220417] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid490347 ] 00:07:50.258 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.258 [2024-07-15 09:16:37.288738] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.258 [2024-07-15 09:16:37.354168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.258 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.259 09:16:37 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.259 09:16:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:51.646 09:16:38 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.646 00:07:51.646 real 0m1.309s 00:07:51.646 user 0m1.216s 00:07:51.646 sys 0m0.106s 00:07:51.646 09:16:38 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.646 09:16:38 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:51.646 ************************************ 00:07:51.646 END TEST accel_decomp_full 00:07:51.646 ************************************ 00:07:51.646 09:16:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:51.646 09:16:38 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:51.646 09:16:38 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:07:51.646 09:16:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.646 09:16:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:51.646 ************************************ 00:07:51.646 START TEST accel_decomp_mcore 00:07:51.646 ************************************ 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:51.646 [2024-07-15 09:16:38.602120] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
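The mcore variant changes only the core mask: the accel_perf command line captured just above ends in -m 0xf, and the EAL/reactor messages that follow show four reactors coming up on cores 0-3 instead of the single core 0 used so far. A sketch of the same multi-core run by hand, using only flags present in the log (config on /dev/fd/62 again omitted):
# hypothetical manual run of the multi-core decompress test; -m 0xf gives accel_perf cores 0-3, matching the four reactors below
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -m 0xf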
00:07:51.646 [2024-07-15 09:16:38.602223] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid490700 ] 00:07:51.646 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.646 [2024-07-15 09:16:38.677714] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:51.646 [2024-07-15 09:16:38.748240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.646 [2024-07-15 09:16:38.748353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.646 [2024-07-15 09:16:38.748510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.646 [2024-07-15 09:16:38.748511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.646 09:16:38 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.646 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:51.647 09:16:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.034 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.034 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.034 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.034 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.034 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.034 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.034 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.034 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.034 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.034 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.034 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.034 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.035 00:07:53.035 real 0m1.315s 00:07:53.035 user 0m4.450s 00:07:53.035 sys 0m0.111s 00:07:53.035 09:16:39 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.035 09:16:39 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:53.035 ************************************ 00:07:53.035 END TEST accel_decomp_mcore 00:07:53.035 ************************************ 00:07:53.035 09:16:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:53.035 09:16:39 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:53.035 09:16:39 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:53.035 09:16:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.035 09:16:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:53.035 ************************************ 00:07:53.035 START TEST accel_decomp_full_mcore 00:07:53.035 ************************************ 00:07:53.035 09:16:39 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:53.035 09:16:39 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:53.035 09:16:39 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:53.035 09:16:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.035 09:16:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.035 09:16:39 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:53.035 09:16:39 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:53.035 09:16:39 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:53.035 09:16:39 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:53.035 09:16:39 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:53.035 09:16:39 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.035 09:16:39 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.035 09:16:39 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:53.035 09:16:39 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:53.035 09:16:39 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:53.035 [2024-07-15 09:16:39.988596] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
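The accel_perf command traced just above for accel_decomp_full_mcore can, in principle, be replayed by hand outside the run_test wrapper. A minimal sketch, assuming a local SPDK checkout built at ./spdk and a placeholder accel.json standing in for the JSON config the harness pipes in on /dev/fd/62 (every flag below is taken verbatim from the trace):

    # hypothetical manual replay of the traced invocation; accel.json is a stand-in config
    ./spdk/build/examples/accel_perf -c accel.json \
        -t 1 -w decompress -l ./spdk/test/accel/bib -y -o 0 -m 0xf

The -m 0xf mask lines up with the '-c 0xf' EAL argument and the four reactor cores reported in the traces below.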
00:07:53.035 [2024-07-15 09:16:39.988675] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid490931 ] 00:07:53.035 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.035 [2024-07-15 09:16:40.058240] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:53.035 [2024-07-15 09:16:40.128170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.035 [2024-07-15 09:16:40.128285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.035 [2024-07-15 09:16:40.128441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.035 [2024-07-15 09:16:40.128442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.035 09:16:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.424 00:07:54.424 real 0m1.319s 00:07:54.424 user 0m4.488s 00:07:54.424 sys 0m0.116s 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.424 09:16:41 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:54.424 ************************************ 00:07:54.424 END TEST accel_decomp_full_mcore 00:07:54.424 ************************************ 00:07:54.424 09:16:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:54.424 09:16:41 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:54.424 09:16:41 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:54.424 09:16:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.424 09:16:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:54.424 ************************************ 00:07:54.424 START TEST accel_decomp_mthread 00:07:54.424 ************************************ 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:54.424 [2024-07-15 09:16:41.384520] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
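A quick sanity check on the multi-core timings reported above: accel_decomp_mcore finishes with user 0m4.450s against real 0m1.315s, and accel_decomp_full_mcore with user 0m4.488s against real 0m1.319s, so roughly 4.45 / 1.32 ≈ 3.4 CPU-seconds per wall-second, which is about what one would expect with most of the four cores in the 0xf mask kept busy. From this point the suite moves to the single-core threaded variants: the run_test line above passes -T 2 instead of -m 0xf, and the EAL parameters below show '-c 0x1' with a single reactor on core 0.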
00:07:54.424 [2024-07-15 09:16:41.384613] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid491128 ] 00:07:54.424 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.424 [2024-07-15 09:16:41.455189] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.424 [2024-07-15 09:16:41.526322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:54.425 09:16:41 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.425 09:16:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.812 09:16:42 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.812 00:07:55.812 real 0m1.308s 00:07:55.812 user 0m1.211s 00:07:55.812 sys 0m0.109s 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.812 09:16:42 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:55.812 ************************************ 00:07:55.812 END TEST accel_decomp_mthread 00:07:55.812 ************************************ 00:07:55.812 09:16:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:55.812 09:16:42 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:55.812 09:16:42 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:55.812 09:16:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.812 09:16:42 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:55.812 ************************************ 00:07:55.812 START TEST accel_decomp_full_mthread 00:07:55.812 ************************************ 00:07:55.812 09:16:42 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:55.812 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:55.812 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:55.812 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.812 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.812 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:55.812 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:55.812 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:55.812 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:55.812 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:55.812 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.812 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.812 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:55.812 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:55.812 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:55.813 [2024-07-15 09:16:42.766063] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
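For reference, the three decompress variants traced in this stretch of the log differ only in the trailing accel_test flags; the option strings below are copied from the run_test lines above, with the bib path shortened:

    accel_test -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf   # accel_decomp_full_mcore
    accel_test -t 1 -w decompress -l test/accel/bib -y -T 2          # accel_decomp_mthread
    accel_test -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2     # accel_decomp_full_mthread

The -o 0 runs echo a '111250 bytes' value in their config traces where the default run echoes '4096 bytes', so -o 0 appears to select a full-sized output buffer for the bib test file.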
00:07:55.813 [2024-07-15 09:16:42.766174] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid491445 ] 00:07:55.813 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.813 [2024-07-15 09:16:42.844360] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.813 [2024-07-15 09:16:42.916610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.813 09:16:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.194 00:07:57.194 real 0m1.346s 00:07:57.194 user 0m1.243s 00:07:57.194 sys 0m0.115s 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.194 09:16:44 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:57.194 ************************************ 00:07:57.194 END TEST accel_decomp_full_mthread 
00:07:57.194 ************************************ 00:07:57.194 09:16:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:57.194 09:16:44 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:57.194 09:16:44 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:57.194 09:16:44 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:57.194 09:16:44 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:57.194 09:16:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.194 09:16:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:57.194 09:16:44 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:57.194 09:16:44 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:57.194 09:16:44 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.194 09:16:44 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.194 09:16:44 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:57.194 09:16:44 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:57.194 09:16:44 accel -- accel/accel.sh@41 -- # jq -r . 00:07:57.194 ************************************ 00:07:57.194 START TEST accel_dif_functional_tests 00:07:57.194 ************************************ 00:07:57.194 09:16:44 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:57.194 [2024-07-15 09:16:44.208224] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:57.194 [2024-07-15 09:16:44.208274] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid491799 ] 00:07:57.194 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.194 [2024-07-15 09:16:44.277093] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:57.194 [2024-07-15 09:16:44.350255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.195 [2024-07-15 09:16:44.350375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.195 [2024-07-15 09:16:44.350378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.455 00:07:57.455 00:07:57.455 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.455 http://cunit.sourceforge.net/ 00:07:57.455 00:07:57.455 00:07:57.455 Suite: accel_dif 00:07:57.455 Test: verify: DIF generated, GUARD check ...passed 00:07:57.455 Test: verify: DIF generated, APPTAG check ...passed 00:07:57.455 Test: verify: DIF generated, REFTAG check ...passed 00:07:57.455 Test: verify: DIF not generated, GUARD check ...[2024-07-15 09:16:44.406059] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:57.455 passed 00:07:57.455 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 09:16:44.406101] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:57.455 passed 00:07:57.455 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 09:16:44.406122] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:57.455 passed 00:07:57.455 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:57.455 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 09:16:44.406174] dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:57.455 passed 00:07:57.455 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:57.455 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:57.455 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:57.455 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 09:16:44.406285] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:57.455 passed 00:07:57.455 Test: verify copy: DIF generated, GUARD check ...passed 00:07:57.455 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:57.455 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:57.455 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 09:16:44.406409] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:57.455 passed 00:07:57.455 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 09:16:44.406433] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:57.455 passed 00:07:57.455 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 09:16:44.406454] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:57.455 passed 00:07:57.455 Test: generate copy: DIF generated, GUARD check ...passed 00:07:57.455 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:57.455 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:57.455 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:57.455 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:57.455 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:57.455 Test: generate copy: iovecs-len validate ...[2024-07-15 09:16:44.406638] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:57.455 passed 00:07:57.455 Test: generate copy: buffer alignment validate ...passed 00:07:57.455 00:07:57.455 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.455 suites 1 1 n/a 0 0 00:07:57.455 tests 26 26 26 0 0 00:07:57.455 asserts 115 115 115 0 n/a 00:07:57.455 00:07:57.455 Elapsed time = 0.000 seconds 00:07:57.455 00:07:57.455 real 0m0.364s 00:07:57.455 user 0m0.486s 00:07:57.455 sys 0m0.140s 00:07:57.455 09:16:44 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.455 09:16:44 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:57.455 ************************************ 00:07:57.455 END TEST accel_dif_functional_tests 00:07:57.455 ************************************ 00:07:57.455 09:16:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:57.455 00:07:57.455 real 0m30.299s 00:07:57.455 user 0m33.741s 00:07:57.455 sys 0m4.306s 00:07:57.455 09:16:44 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.455 09:16:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:57.455 ************************************ 00:07:57.455 END TEST accel 00:07:57.455 ************************************ 00:07:57.455 09:16:44 -- common/autotest_common.sh@1142 -- # return 0 00:07:57.456 09:16:44 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:57.456 09:16:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:57.456 09:16:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.456 09:16:44 -- common/autotest_common.sh@10 -- # set +x 00:07:57.456 ************************************ 00:07:57.456 START TEST accel_rpc 00:07:57.456 ************************************ 00:07:57.456 09:16:44 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:57.716 * Looking for test storage... 00:07:57.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:57.716 09:16:44 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:57.716 09:16:44 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=491898 00:07:57.716 09:16:44 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 491898 00:07:57.716 09:16:44 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:57.716 09:16:44 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 491898 ']' 00:07:57.716 09:16:44 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.716 09:16:44 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:57.716 09:16:44 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.716 09:16:44 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:57.716 09:16:44 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.716 [2024-07-15 09:16:44.799798] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
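The accel_rpc suite that starts here drives a freshly launched spdk_tgt (started with --wait-for-rpc, pid 491898 in the traces above) through the JSON-RPC interface. A hypothetical manual replay of the opcode-assignment flow exercised below, assuming the target is already up and scripts/rpc.py talks to the default /var/tmp/spdk.sock:

    # the test first assigns the copy opcode to a nonexistent 'incorrect' module, then to 'software'
    ./spdk/scripts/rpc.py accel_assign_opc -o copy -m software
    ./spdk/scripts/rpc.py framework_start_init
    ./spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy    # the test greps this for 'software'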
00:07:57.716 [2024-07-15 09:16:44.799863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid491898 ] 00:07:57.716 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.716 [2024-07-15 09:16:44.873349] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.977 [2024-07-15 09:16:44.948227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.548 09:16:45 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:58.549 09:16:45 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:58.549 09:16:45 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:58.549 09:16:45 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:58.549 09:16:45 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:58.549 09:16:45 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:58.549 09:16:45 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:58.549 09:16:45 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:58.549 09:16:45 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.549 09:16:45 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.549 ************************************ 00:07:58.549 START TEST accel_assign_opcode 00:07:58.549 ************************************ 00:07:58.549 09:16:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:58.549 09:16:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:58.549 09:16:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.549 09:16:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:58.549 [2024-07-15 09:16:45.618182] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:58.549 09:16:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.549 09:16:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:58.549 09:16:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.549 09:16:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:58.549 [2024-07-15 09:16:45.630207] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:58.549 09:16:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.549 09:16:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:58.549 09:16:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.549 09:16:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:58.837 09:16:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.837 09:16:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:58.837 09:16:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:58.837 09:16:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:07:58.837 09:16:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:58.837 09:16:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:58.837 09:16:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.837 software 00:07:58.837 00:07:58.837 real 0m0.212s 00:07:58.837 user 0m0.049s 00:07:58.837 sys 0m0.012s 00:07:58.837 09:16:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.837 09:16:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:58.837 ************************************ 00:07:58.837 END TEST accel_assign_opcode 00:07:58.837 ************************************ 00:07:58.837 09:16:45 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:58.837 09:16:45 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 491898 00:07:58.837 09:16:45 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 491898 ']' 00:07:58.837 09:16:45 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 491898 00:07:58.837 09:16:45 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:58.837 09:16:45 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:58.837 09:16:45 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 491898 00:07:58.837 09:16:45 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:58.837 09:16:45 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:58.837 09:16:45 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 491898' 00:07:58.837 killing process with pid 491898 00:07:58.837 09:16:45 accel_rpc -- common/autotest_common.sh@967 -- # kill 491898 00:07:58.837 09:16:45 accel_rpc -- common/autotest_common.sh@972 -- # wait 491898 00:07:59.098 00:07:59.098 real 0m1.484s 00:07:59.098 user 0m1.548s 00:07:59.098 sys 0m0.442s 00:07:59.098 09:16:46 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.098 09:16:46 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.098 ************************************ 00:07:59.098 END TEST accel_rpc 00:07:59.098 ************************************ 00:07:59.098 09:16:46 -- common/autotest_common.sh@1142 -- # return 0 00:07:59.098 09:16:46 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:59.098 09:16:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:59.098 09:16:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.098 09:16:46 -- common/autotest_common.sh@10 -- # set +x 00:07:59.098 ************************************ 00:07:59.098 START TEST app_cmdline 00:07:59.098 ************************************ 00:07:59.098 09:16:46 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:59.098 * Looking for test storage... 
00:07:59.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:59.357 09:16:46 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:59.357 09:16:46 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=492273 00:07:59.357 09:16:46 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 492273 00:07:59.357 09:16:46 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:59.357 09:16:46 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 492273 ']' 00:07:59.357 09:16:46 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.357 09:16:46 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:59.357 09:16:46 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.357 09:16:46 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:59.357 09:16:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:59.357 [2024-07-15 09:16:46.356389] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:59.357 [2024-07-15 09:16:46.356442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492273 ] 00:07:59.357 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.357 [2024-07-15 09:16:46.424135] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.357 [2024-07-15 09:16:46.493150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.926 09:16:47 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:59.926 09:16:47 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:59.926 09:16:47 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:00.186 { 00:08:00.186 "version": "SPDK v24.09-pre git sha1 a22f117fe", 00:08:00.186 "fields": { 00:08:00.186 "major": 24, 00:08:00.186 "minor": 9, 00:08:00.186 "patch": 0, 00:08:00.186 "suffix": "-pre", 00:08:00.186 "commit": "a22f117fe" 00:08:00.186 } 00:08:00.186 } 00:08:00.186 09:16:47 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:00.186 09:16:47 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:00.186 09:16:47 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:00.186 09:16:47 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:00.186 09:16:47 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:00.186 09:16:47 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:00.186 09:16:47 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:00.186 09:16:47 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.186 09:16:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:00.186 09:16:47 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.186 09:16:47 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:00.186 09:16:47 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:00.186 09:16:47 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.186 09:16:47 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:00.186 09:16:47 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.186 09:16:47 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:00.186 09:16:47 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.186 09:16:47 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:00.186 09:16:47 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.186 09:16:47 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:00.186 09:16:47 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.186 09:16:47 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:00.186 09:16:47 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:00.186 09:16:47 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.446 request: 00:08:00.446 { 00:08:00.446 "method": "env_dpdk_get_mem_stats", 00:08:00.446 "req_id": 1 00:08:00.446 } 00:08:00.446 Got JSON-RPC error response 00:08:00.446 response: 00:08:00.446 { 00:08:00.446 "code": -32601, 00:08:00.446 "message": "Method not found" 00:08:00.446 } 00:08:00.446 09:16:47 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:00.446 09:16:47 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:00.446 09:16:47 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:00.446 09:16:47 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:00.446 09:16:47 app_cmdline -- app/cmdline.sh@1 -- # killprocess 492273 00:08:00.446 09:16:47 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 492273 ']' 00:08:00.446 09:16:47 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 492273 00:08:00.446 09:16:47 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:00.446 09:16:47 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:00.446 09:16:47 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 492273 00:08:00.446 09:16:47 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:00.446 09:16:47 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:00.446 09:16:47 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 492273' 00:08:00.446 killing process with pid 492273 00:08:00.446 09:16:47 app_cmdline -- common/autotest_common.sh@967 -- # kill 492273 00:08:00.446 09:16:47 app_cmdline -- common/autotest_common.sh@972 -- # wait 492273 00:08:00.706 00:08:00.706 real 0m1.533s 00:08:00.706 user 0m1.815s 00:08:00.706 sys 0m0.407s 00:08:00.706 09:16:47 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.706 
09:16:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:00.706 ************************************ 00:08:00.706 END TEST app_cmdline 00:08:00.706 ************************************ 00:08:00.706 09:16:47 -- common/autotest_common.sh@1142 -- # return 0 00:08:00.706 09:16:47 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:00.706 09:16:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:00.706 09:16:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.706 09:16:47 -- common/autotest_common.sh@10 -- # set +x 00:08:00.706 ************************************ 00:08:00.706 START TEST version 00:08:00.706 ************************************ 00:08:00.706 09:16:47 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:00.706 * Looking for test storage... 00:08:00.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:00.967 09:16:47 version -- app/version.sh@17 -- # get_header_version major 00:08:00.967 09:16:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:00.967 09:16:47 version -- app/version.sh@14 -- # cut -f2 00:08:00.967 09:16:47 version -- app/version.sh@14 -- # tr -d '"' 00:08:00.967 09:16:47 version -- app/version.sh@17 -- # major=24 00:08:00.967 09:16:47 version -- app/version.sh@18 -- # get_header_version minor 00:08:00.967 09:16:47 version -- app/version.sh@14 -- # cut -f2 00:08:00.967 09:16:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:00.967 09:16:47 version -- app/version.sh@14 -- # tr -d '"' 00:08:00.967 09:16:47 version -- app/version.sh@18 -- # minor=9 00:08:00.967 09:16:47 version -- app/version.sh@19 -- # get_header_version patch 00:08:00.967 09:16:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:00.967 09:16:47 version -- app/version.sh@14 -- # cut -f2 00:08:00.967 09:16:47 version -- app/version.sh@14 -- # tr -d '"' 00:08:00.967 09:16:47 version -- app/version.sh@19 -- # patch=0 00:08:00.967 09:16:47 version -- app/version.sh@20 -- # get_header_version suffix 00:08:00.967 09:16:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:00.967 09:16:47 version -- app/version.sh@14 -- # cut -f2 00:08:00.967 09:16:47 version -- app/version.sh@14 -- # tr -d '"' 00:08:00.967 09:16:47 version -- app/version.sh@20 -- # suffix=-pre 00:08:00.967 09:16:47 version -- app/version.sh@22 -- # version=24.9 00:08:00.967 09:16:47 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:00.967 09:16:47 version -- app/version.sh@28 -- # version=24.9rc0 00:08:00.967 09:16:47 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:00.967 09:16:47 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
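The version test traced above and below boils down to comparing the C header against the Python package. A minimal sketch of that comparison, assuming an SPDK checkout at ./spdk (the rc0 handling for a "-pre" suffix is inferred from the 24.9rc0 value in the trace, not copied from version.sh):

hdr=./spdk/include/spdk/version.h
major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
version="$major.$minor"                            # 24.9 in this run
(( patch != 0 )) && version="$version.$patch"
[[ $suffix == -pre ]] && version="${version}rc0"   # assumed: pre-release builds report rc0, e.g. 24.9rc0
py_version=$(PYTHONPATH=./spdk/python python3 -c 'import spdk; print(spdk.__version__)')
[[ $py_version == "$version" ]] && echo "version.h and python module agree: $version"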
00:08:00.967 09:16:47 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:00.967 09:16:47 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:00.967 00:08:00.967 real 0m0.164s 00:08:00.967 user 0m0.081s 00:08:00.967 sys 0m0.120s 00:08:00.967 09:16:47 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.967 09:16:47 version -- common/autotest_common.sh@10 -- # set +x 00:08:00.967 ************************************ 00:08:00.967 END TEST version 00:08:00.967 ************************************ 00:08:00.967 09:16:48 -- common/autotest_common.sh@1142 -- # return 0 00:08:00.967 09:16:48 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:00.967 09:16:48 -- spdk/autotest.sh@198 -- # uname -s 00:08:00.967 09:16:48 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:00.967 09:16:48 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:00.967 09:16:48 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:00.967 09:16:48 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:00.967 09:16:48 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:00.967 09:16:48 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:00.967 09:16:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:00.967 09:16:48 -- common/autotest_common.sh@10 -- # set +x 00:08:00.967 09:16:48 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:00.967 09:16:48 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:00.967 09:16:48 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:00.967 09:16:48 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:00.967 09:16:48 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:00.967 09:16:48 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:00.967 09:16:48 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:00.967 09:16:48 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:00.967 09:16:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.967 09:16:48 -- common/autotest_common.sh@10 -- # set +x 00:08:00.967 ************************************ 00:08:00.967 START TEST nvmf_tcp 00:08:00.967 ************************************ 00:08:00.967 09:16:48 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:01.229 * Looking for test storage... 00:08:01.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.229 09:16:48 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.229 09:16:48 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.229 09:16:48 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.229 09:16:48 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.229 09:16:48 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.229 09:16:48 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.229 09:16:48 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:01.229 09:16:48 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:01.229 09:16:48 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:01.229 09:16:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:01.229 09:16:48 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:01.229 09:16:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:01.229 09:16:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.229 09:16:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:01.229 ************************************ 00:08:01.229 START TEST nvmf_example 00:08:01.229 ************************************ 00:08:01.229 09:16:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:01.229 * Looking for test storage... 
00:08:01.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.229 09:16:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.229 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:01.230 09:16:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.491 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:01.491 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:01.491 09:16:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:08:01.491 09:16:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:09.627 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:09.627 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:09.627 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:09.627 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:09.627 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:09.627 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:09.627 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:09.627 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:09.628 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:09.628 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:09.628 Found net devices under 
0000:31:00.0: cvl_0_0 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:09.628 Found net devices under 0000:31:00.1: cvl_0_1 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:09.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.814 ms 00:08:09.628 00:08:09.628 --- 10.0.0.2 ping statistics --- 00:08:09.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.628 rtt min/avg/max/mdev = 0.814/0.814/0.814/0.000 ms 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:09.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:09.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:08:09.628 00:08:09.628 --- 10.0.0.1 ping statistics --- 00:08:09.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.628 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=497050 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 497050 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 497050 ']' 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.628 09:16:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:09.629 09:16:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:09.629 09:16:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:09.629 09:16:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:09.629 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.201 09:16:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:10.201 09:16:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:08:10.201 09:16:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:10.201 09:16:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:10.201 09:16:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:10.201 09:16:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:10.201 09:16:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.201 09:16:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:10.201 09:16:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.201 09:16:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:10.201 09:16:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.201 09:16:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:10.201 09:16:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.201 09:16:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:10.201 09:16:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:10.201 09:16:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.202 09:16:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:10.202 09:16:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.202 09:16:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:10.202 09:16:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:10.202 09:16:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.202 09:16:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:10.202 09:16:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.202 09:16:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.202 09:16:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.202 09:16:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:10.202 09:16:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.202 09:16:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:10.202 09:16:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:10.202 EAL: No free 2048 kB hugepages reported on node 1 
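Stripped of the xtrace noise, the target-side setup that nvmf_example.sh drove above, plus the perf run whose results follow, reduces to roughly this sequence. rpc_cmd in the log wraps scripts/rpc.py; the ./spdk paths and the default /var/tmp/spdk.sock socket are illustrative assumptions, not the CI workspace paths.

rpc=./spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o -u 8192                       # transport options exactly as traced above
"$rpc" bdev_malloc_create 64 512                                     # 64 MiB malloc bdev, 512-byte blocks
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# drive I/O against the listener, as the results below report:
./spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'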
00:08:22.506 Initializing NVMe Controllers 00:08:22.506 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:22.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:22.506 Initialization complete. Launching workers. 00:08:22.506 ======================================================== 00:08:22.506 Latency(us) 00:08:22.506 Device Information : IOPS MiB/s Average min max 00:08:22.506 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18801.32 73.44 3403.53 610.67 15380.92 00:08:22.506 ======================================================== 00:08:22.506 Total : 18801.32 73.44 3403.53 610.67 15380.92 00:08:22.506 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:22.506 rmmod nvme_tcp 00:08:22.506 rmmod nvme_fabrics 00:08:22.506 rmmod nvme_keyring 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 497050 ']' 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 497050 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 497050 ']' 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 497050 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 497050 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 497050' 00:08:22.506 killing process with pid 497050 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 497050 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 497050 00:08:22.506 nvmf threads initialize successfully 00:08:22.506 bdev subsystem init successfully 00:08:22.506 created a nvmf target service 00:08:22.506 create targets's poll groups done 00:08:22.506 all subsystems of target started 00:08:22.506 nvmf target is running 00:08:22.506 all subsystems of target stopped 00:08:22.506 destroy targets's poll groups done 00:08:22.506 destroyed the nvmf target service 00:08:22.506 bdev subsystem finish successfully 00:08:22.506 nvmf threads destroy successfully 00:08:22.506 09:17:07 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.506 09:17:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.767 09:17:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:22.767 09:17:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:22.767 09:17:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:22.767 09:17:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:22.767 00:08:22.767 real 0m21.597s 00:08:22.767 user 0m46.497s 00:08:22.767 sys 0m6.841s 00:08:22.767 09:17:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.767 09:17:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:22.767 ************************************ 00:08:22.767 END TEST nvmf_example 00:08:22.767 ************************************ 00:08:22.767 09:17:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:22.767 09:17:09 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:22.767 09:17:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:22.767 09:17:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.767 09:17:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:22.767 ************************************ 00:08:22.767 START TEST nvmf_filesystem 00:08:22.767 ************************************ 00:08:22.767 09:17:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:23.030 * Looking for test storage... 
00:08:23.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:23.030 09:17:10 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:23.030 09:17:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:23.031 #define SPDK_CONFIG_H 00:08:23.031 #define SPDK_CONFIG_APPS 1 00:08:23.031 #define SPDK_CONFIG_ARCH native 00:08:23.031 #undef SPDK_CONFIG_ASAN 00:08:23.031 #undef SPDK_CONFIG_AVAHI 00:08:23.031 #undef SPDK_CONFIG_CET 00:08:23.031 #define SPDK_CONFIG_COVERAGE 1 00:08:23.031 #define SPDK_CONFIG_CROSS_PREFIX 00:08:23.031 #undef SPDK_CONFIG_CRYPTO 00:08:23.031 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:23.031 #undef SPDK_CONFIG_CUSTOMOCF 00:08:23.031 #undef SPDK_CONFIG_DAOS 00:08:23.031 #define SPDK_CONFIG_DAOS_DIR 00:08:23.031 #define SPDK_CONFIG_DEBUG 1 00:08:23.031 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:23.031 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:23.031 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:23.031 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:23.031 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:23.031 #undef SPDK_CONFIG_DPDK_UADK 00:08:23.031 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:23.031 #define SPDK_CONFIG_EXAMPLES 1 00:08:23.031 #undef SPDK_CONFIG_FC 00:08:23.031 #define SPDK_CONFIG_FC_PATH 00:08:23.031 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:23.031 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:23.031 #undef SPDK_CONFIG_FUSE 00:08:23.031 #undef SPDK_CONFIG_FUZZER 00:08:23.031 #define SPDK_CONFIG_FUZZER_LIB 00:08:23.031 #undef SPDK_CONFIG_GOLANG 00:08:23.031 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:23.031 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:23.031 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:23.031 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:23.031 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:23.031 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:23.031 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:23.031 #define SPDK_CONFIG_IDXD 1 00:08:23.031 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:23.031 #undef SPDK_CONFIG_IPSEC_MB 00:08:23.031 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:23.031 #define SPDK_CONFIG_ISAL 1 00:08:23.031 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:23.031 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:23.031 #define SPDK_CONFIG_LIBDIR 00:08:23.031 #undef SPDK_CONFIG_LTO 00:08:23.031 #define SPDK_CONFIG_MAX_LCORES 128 00:08:23.031 #define SPDK_CONFIG_NVME_CUSE 1 00:08:23.031 #undef SPDK_CONFIG_OCF 00:08:23.031 #define SPDK_CONFIG_OCF_PATH 00:08:23.031 #define 
SPDK_CONFIG_OPENSSL_PATH 00:08:23.031 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:23.031 #define SPDK_CONFIG_PGO_DIR 00:08:23.031 #undef SPDK_CONFIG_PGO_USE 00:08:23.031 #define SPDK_CONFIG_PREFIX /usr/local 00:08:23.031 #undef SPDK_CONFIG_RAID5F 00:08:23.031 #undef SPDK_CONFIG_RBD 00:08:23.031 #define SPDK_CONFIG_RDMA 1 00:08:23.031 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:23.031 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:23.031 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:23.031 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:23.031 #define SPDK_CONFIG_SHARED 1 00:08:23.031 #undef SPDK_CONFIG_SMA 00:08:23.031 #define SPDK_CONFIG_TESTS 1 00:08:23.031 #undef SPDK_CONFIG_TSAN 00:08:23.031 #define SPDK_CONFIG_UBLK 1 00:08:23.031 #define SPDK_CONFIG_UBSAN 1 00:08:23.031 #undef SPDK_CONFIG_UNIT_TESTS 00:08:23.031 #undef SPDK_CONFIG_URING 00:08:23.031 #define SPDK_CONFIG_URING_PATH 00:08:23.031 #undef SPDK_CONFIG_URING_ZNS 00:08:23.031 #undef SPDK_CONFIG_USDT 00:08:23.031 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:23.031 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:23.031 #define SPDK_CONFIG_VFIO_USER 1 00:08:23.031 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:23.031 #define SPDK_CONFIG_VHOST 1 00:08:23.031 #define SPDK_CONFIG_VIRTIO 1 00:08:23.031 #undef SPDK_CONFIG_VTUNE 00:08:23.031 #define SPDK_CONFIG_VTUNE_DIR 00:08:23.031 #define SPDK_CONFIG_WERROR 1 00:08:23.031 #define SPDK_CONFIG_WPDK_DIR 00:08:23.031 #undef SPDK_CONFIG_XNVME 00:08:23.031 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:08:23.031 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:08:23.032 09:17:10 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:23.032 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
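The long run of "# : 0" / "# export SPDK_TEST_..." pairs earlier in this trace is the shape bash xtrace gives a default-then-export idiom. A minimal sketch of that idiom is below; it is illustrative only — the variable names are copied from the trace, but the defaults shown are assumptions for the sketch, not necessarily the values hard-coded in autotest_common.sh.

    #!/usr/bin/env bash
    set -x
    # Give each test knob a default only when the caller (e.g. autorun-spdk.conf)
    # has not already set it, then export it so child scripts see the same value.
    : "${SPDK_RUN_FUNCTIONAL_TEST:=0}"    # functional tests off unless requested
    export SPDK_RUN_FUNCTIONAL_TEST
    : "${SPDK_TEST_NVMF:=0}"              # NVMe-oF target tests
    export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"  # transport under test
    export SPDK_TEST_NVMF_TRANSPORT
    # With 'set -x' enabled, each ':' line is echoed with its expanded value,
    # which produces exactly the "# : 0" / "# export SPDK_TEST_..." pairs seen above.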
00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 499850 ]] 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 499850 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.bpGFY6 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.bpGFY6/tests/target /tmp/spdk.bpGFY6 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953012224 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4331417600 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=123004948480 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129370992640 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6366044160 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64682119168 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685494272 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864273920 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874198528 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9924608 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=353280 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:08:23.033 09:17:10 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=150528 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64684961792 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685498368 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=536576 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937093120 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937097216 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:23.033 * Looking for test storage... 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.033 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=123004948480 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8580636672 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.034 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:23.295 09:17:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:31.440 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.440 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 
(0x8086 - 0x159b)' 00:08:31.440 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:31.441 Found net devices under 0000:31:00.0: cvl_0_0 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:31.441 Found net devices under 0000:31:00.1: cvl_0_1 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.441 09:17:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:31.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:08:31.441 00:08:31.441 --- 10.0.0.2 ping statistics --- 00:08:31.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.441 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:31.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:08:31.441 00:08:31.441 --- 10.0.0.1 ping statistics --- 00:08:31.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.441 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.441 ************************************ 00:08:31.441 START TEST nvmf_filesystem_no_in_capsule 00:08:31.441 ************************************ 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.441 09:17:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:31.442 09:17:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:31.442 09:17:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=504157 00:08:31.442 09:17:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 504157 00:08:31.442 09:17:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:31.442 09:17:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 504157 ']' 00:08:31.442 09:17:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.442 09:17:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:31.442 09:17:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.442 09:17:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:31.442 09:17:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:31.442 [2024-07-15 09:17:18.391422] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:08:31.442 [2024-07-15 09:17:18.391479] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.442 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.442 [2024-07-15 09:17:18.468814] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.442 [2024-07-15 09:17:18.546353] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.442 [2024-07-15 09:17:18.546391] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.442 [2024-07-15 09:17:18.546399] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.442 [2024-07-15 09:17:18.546405] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.442 [2024-07-15 09:17:18.546411] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.442 [2024-07-15 09:17:18.546558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.442 [2024-07-15 09:17:18.546677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.442 [2024-07-15 09:17:18.546837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.442 [2024-07-15 09:17:18.546837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.014 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.014 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:32.014 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:32.014 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:32.014 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:32.014 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.014 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:32.014 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:32.014 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.015 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:32.015 [2024-07-15 09:17:19.212317] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.276 
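For reference, the namespace plumbing and target bring-up that nvmf_tcp_init/nvmfappstart traced above reduce to the sketch below. Interface names, addresses, port and flags are taken verbatim from this log; the rpc_cmd wrapper is assumed to resolve to scripts/rpc.py against /var/tmp/spdk.sock, and the nvmf_tgt path is shortened from the workspace path shown in the trace.

  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                  # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator-side port stays in the root namespace
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                             # root namespace -> target namespace
  ip netns exec $NS ping -c 1 10.0.0.1           # target namespace -> root namespace
  ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # once /var/tmp/spdk.sock answers (waitforlisten), create the TCP transport as traced above:
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192 -c 0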
09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:32.276 Malloc1 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:32.276 [2024-07-15 09:17:19.341042] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- 
# set +x 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:32.276 { 00:08:32.276 "name": "Malloc1", 00:08:32.276 "aliases": [ 00:08:32.276 "ad131405-0140-4e38-8e00-2f111bca2091" 00:08:32.276 ], 00:08:32.276 "product_name": "Malloc disk", 00:08:32.276 "block_size": 512, 00:08:32.276 "num_blocks": 1048576, 00:08:32.276 "uuid": "ad131405-0140-4e38-8e00-2f111bca2091", 00:08:32.276 "assigned_rate_limits": { 00:08:32.276 "rw_ios_per_sec": 0, 00:08:32.276 "rw_mbytes_per_sec": 0, 00:08:32.276 "r_mbytes_per_sec": 0, 00:08:32.276 "w_mbytes_per_sec": 0 00:08:32.276 }, 00:08:32.276 "claimed": true, 00:08:32.276 "claim_type": "exclusive_write", 00:08:32.276 "zoned": false, 00:08:32.276 "supported_io_types": { 00:08:32.276 "read": true, 00:08:32.276 "write": true, 00:08:32.276 "unmap": true, 00:08:32.276 "flush": true, 00:08:32.276 "reset": true, 00:08:32.276 "nvme_admin": false, 00:08:32.276 "nvme_io": false, 00:08:32.276 "nvme_io_md": false, 00:08:32.276 "write_zeroes": true, 00:08:32.276 "zcopy": true, 00:08:32.276 "get_zone_info": false, 00:08:32.276 "zone_management": false, 00:08:32.276 "zone_append": false, 00:08:32.276 "compare": false, 00:08:32.276 "compare_and_write": false, 00:08:32.276 "abort": true, 00:08:32.276 "seek_hole": false, 00:08:32.276 "seek_data": false, 00:08:32.276 "copy": true, 00:08:32.276 "nvme_iov_md": false 00:08:32.276 }, 00:08:32.276 "memory_domains": [ 00:08:32.276 { 00:08:32.276 "dma_device_id": "system", 00:08:32.276 "dma_device_type": 1 00:08:32.276 }, 00:08:32.276 { 00:08:32.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.276 "dma_device_type": 2 00:08:32.276 } 00:08:32.276 ], 00:08:32.276 "driver_specific": {} 00:08:32.276 } 00:08:32.276 ]' 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:32.276 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:34.188 09:17:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:34.188 09:17:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:34.188 09:17:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:08:34.188 09:17:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:34.188 09:17:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:36.099 09:17:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:36.099 09:17:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:36.099 09:17:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:36.099 09:17:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:36.099 09:17:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:36.099 09:17:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:36.099 09:17:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:36.099 09:17:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:36.099 09:17:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:36.099 09:17:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:36.099 09:17:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:36.099 09:17:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:36.099 09:17:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:36.099 09:17:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:36.099 09:17:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:36.099 09:17:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:36.099 09:17:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:36.099 09:17:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:36.670 09:17:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:37.613 09:17:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:37.613 09:17:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:37.613 09:17:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:37.613 09:17:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.613 09:17:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:37.613 ************************************ 
00:08:37.613 START TEST filesystem_ext4 00:08:37.613 ************************************ 00:08:37.613 09:17:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:37.613 09:17:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:37.613 09:17:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:37.613 09:17:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:37.613 09:17:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:37.613 09:17:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:37.613 09:17:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:37.613 09:17:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:37.613 09:17:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:37.613 09:17:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:37.613 09:17:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:37.613 mke2fs 1.46.5 (30-Dec-2021) 00:08:37.613 Discarding device blocks: 0/522240 done 00:08:37.613 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:37.613 Filesystem UUID: 93306ebe-3745-4e29-97ec-594d3a807fad 00:08:37.613 Superblock backups stored on blocks: 00:08:37.613 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:37.613 00:08:37.613 Allocating group tables: 0/64 done 00:08:37.613 Writing inode tables: 0/64 done 00:08:40.914 Creating journal (8192 blocks): done 00:08:40.914 Writing superblocks and filesystem accounting information: 0/64 done 00:08:40.914 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:40.914 09:17:27 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 504157 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:40.914 00:08:40.914 real 0m3.204s 00:08:40.914 user 0m0.023s 00:08:40.914 sys 0m0.051s 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:40.914 ************************************ 00:08:40.914 END TEST filesystem_ext4 00:08:40.914 ************************************ 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:40.914 ************************************ 00:08:40.914 START TEST filesystem_btrfs 00:08:40.914 ************************************ 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:40.914 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:40.914 
09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:41.175 btrfs-progs v6.6.2 00:08:41.175 See https://btrfs.readthedocs.io for more information. 00:08:41.175 00:08:41.175 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:41.175 NOTE: several default settings have changed in version 5.15, please make sure 00:08:41.175 this does not affect your deployments: 00:08:41.175 - DUP for metadata (-m dup) 00:08:41.175 - enabled no-holes (-O no-holes) 00:08:41.175 - enabled free-space-tree (-R free-space-tree) 00:08:41.175 00:08:41.175 Label: (null) 00:08:41.175 UUID: 62facd87-877b-4d18-9095-da2d510352a1 00:08:41.175 Node size: 16384 00:08:41.175 Sector size: 4096 00:08:41.175 Filesystem size: 510.00MiB 00:08:41.175 Block group profiles: 00:08:41.175 Data: single 8.00MiB 00:08:41.175 Metadata: DUP 32.00MiB 00:08:41.175 System: DUP 8.00MiB 00:08:41.175 SSD detected: yes 00:08:41.175 Zoned device: no 00:08:41.175 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:41.175 Runtime features: free-space-tree 00:08:41.175 Checksum: crc32c 00:08:41.175 Number of devices: 1 00:08:41.175 Devices: 00:08:41.175 ID SIZE PATH 00:08:41.175 1 510.00MiB /dev/nvme0n1p1 00:08:41.175 00:08:41.175 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:41.175 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:41.747 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:41.747 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:41.747 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:41.747 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:41.747 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:41.747 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:41.747 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 504157 00:08:41.747 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:41.747 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:41.747 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:41.747 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:41.747 00:08:41.747 real 0m0.921s 00:08:41.747 user 0m0.025s 00:08:41.747 sys 0m0.063s 00:08:41.747 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.747 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:41.747 
************************************ 00:08:41.747 END TEST filesystem_btrfs 00:08:41.747 ************************************ 00:08:41.748 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:41.748 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:41.748 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:41.748 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.748 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:41.748 ************************************ 00:08:41.748 START TEST filesystem_xfs 00:08:41.748 ************************************ 00:08:41.748 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:41.748 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:41.748 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:41.748 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:41.748 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:41.748 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:41.748 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:41.748 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:08:41.748 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:41.748 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:41.748 09:17:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:42.009 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:42.009 = sectsz=512 attr=2, projid32bit=1 00:08:42.009 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:42.009 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:42.009 data = bsize=4096 blocks=130560, imaxpct=25 00:08:42.009 = sunit=0 swidth=0 blks 00:08:42.009 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:42.009 log =internal log bsize=4096 blocks=16384, version=2 00:08:42.009 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:42.009 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:42.952 Discarding blocks...Done. 
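The xfs pass continues below with the same mount/umount cycle already run for ext4 and btrfs. Condensed from target/filesystem.sh as traced in this log (device path and target PID taken from the trace), each filesystem_* test is essentially:

  fstype=xfs                                   # ext4 / btrfs / xfs in turn
  dev=/dev/nvme0n1p1
  mkfs.$fstype -f "$dev"                       # ext4 uses -F, btrfs/xfs use -f
  mount "$dev" /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 504157                               # the nvmf_tgt process must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1        # namespace still exposed to the host
  lsblk -l -o NAME | grep -q -w nvme0n1p1      # partition still present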
00:08:42.952 09:17:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:42.952 09:17:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:44.866 09:17:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:44.866 09:17:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:44.866 09:17:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:44.866 09:17:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:44.866 09:17:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:44.866 09:17:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:44.866 09:17:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 504157 00:08:44.866 09:17:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:44.866 09:17:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:44.866 09:17:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:44.866 09:17:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:44.866 00:08:44.866 real 0m3.045s 00:08:44.866 user 0m0.029s 00:08:44.866 sys 0m0.048s 00:08:44.866 09:17:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.866 09:17:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:44.866 ************************************ 00:08:44.866 END TEST filesystem_xfs 00:08:44.866 ************************************ 00:08:44.866 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:44.866 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:45.125 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:45.125 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:45.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.385 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:45.385 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:45.385 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:45.385 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:45.385 09:17:32 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:45.385 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:45.385 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:45.385 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:45.385 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.385 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:45.385 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.385 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:45.385 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 504157 00:08:45.385 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 504157 ']' 00:08:45.385 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 504157 00:08:45.385 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:45.385 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:45.385 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 504157 00:08:45.385 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:45.385 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:45.385 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 504157' 00:08:45.385 killing process with pid 504157 00:08:45.385 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 504157 00:08:45.385 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 504157 00:08:45.646 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:45.646 00:08:45.646 real 0m14.400s 00:08:45.646 user 0m56.766s 00:08:45.646 sys 0m1.079s 00:08:45.646 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:45.646 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:45.646 ************************************ 00:08:45.646 END TEST nvmf_filesystem_no_in_capsule 00:08:45.646 ************************************ 00:08:45.646 09:17:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:45.646 09:17:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:45.646 09:17:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:08:45.646 09:17:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.646 09:17:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:45.646 ************************************ 00:08:45.646 START TEST nvmf_filesystem_in_capsule 00:08:45.646 ************************************ 00:08:45.646 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:45.646 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:45.646 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:45.646 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:45.646 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:45.646 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:45.646 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=507141 00:08:45.646 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 507141 00:08:45.646 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 507141 ']' 00:08:45.646 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:45.646 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.646 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:45.646 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.646 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:45.646 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:45.908 [2024-07-15 09:17:32.865480] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:08:45.908 [2024-07-15 09:17:32.865531] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.908 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.908 [2024-07-15 09:17:32.940872] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.908 [2024-07-15 09:17:33.015290] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.908 [2024-07-15 09:17:33.015327] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
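nvmfappstart above forks the target inside the namespace and then sits in waitforlisten until the RPC socket answers. A simplified sketch of that wait follows (PID and socket path from this log; the real helper lives in autotest_common.sh and its polling details may differ):

  pid=507141
  sock=/var/tmp/spdk.sock
  echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
  for ((i = 0; i < 100; i++)); do              # max_retries=100 as in the trace
      kill -0 "$pid" || exit 1                 # target died during start-up
      scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done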
00:08:45.908 [2024-07-15 09:17:33.015335] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.908 [2024-07-15 09:17:33.015342] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.908 [2024-07-15 09:17:33.015347] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.908 [2024-07-15 09:17:33.015484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.908 [2024-07-15 09:17:33.015600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.908 [2024-07-15 09:17:33.015761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.908 [2024-07-15 09:17:33.015771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.477 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:46.477 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:46.477 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:46.477 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:46.477 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:46.739 [2024-07-15 09:17:33.694392] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:46.739 Malloc1 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.739 09:17:33 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:46.739 [2024-07-15 09:17:33.820002] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.739 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:46.739 { 00:08:46.739 "name": "Malloc1", 00:08:46.739 "aliases": [ 00:08:46.739 "f298bdf0-c6b4-4bb2-aa56-5d2f40f2a785" 00:08:46.739 ], 00:08:46.739 "product_name": "Malloc disk", 00:08:46.739 "block_size": 512, 00:08:46.739 "num_blocks": 1048576, 00:08:46.739 "uuid": "f298bdf0-c6b4-4bb2-aa56-5d2f40f2a785", 00:08:46.739 "assigned_rate_limits": { 00:08:46.739 "rw_ios_per_sec": 0, 00:08:46.739 "rw_mbytes_per_sec": 0, 00:08:46.739 "r_mbytes_per_sec": 0, 00:08:46.739 "w_mbytes_per_sec": 0 00:08:46.739 }, 00:08:46.739 "claimed": true, 00:08:46.739 "claim_type": "exclusive_write", 00:08:46.739 "zoned": false, 00:08:46.739 "supported_io_types": { 00:08:46.739 "read": true, 00:08:46.739 "write": true, 00:08:46.739 "unmap": true, 00:08:46.739 "flush": true, 00:08:46.739 "reset": true, 00:08:46.739 "nvme_admin": false, 00:08:46.739 "nvme_io": false, 00:08:46.739 "nvme_io_md": false, 00:08:46.739 "write_zeroes": true, 00:08:46.739 "zcopy": true, 00:08:46.739 "get_zone_info": false, 00:08:46.739 "zone_management": false, 00:08:46.739 
"zone_append": false, 00:08:46.739 "compare": false, 00:08:46.739 "compare_and_write": false, 00:08:46.739 "abort": true, 00:08:46.739 "seek_hole": false, 00:08:46.739 "seek_data": false, 00:08:46.739 "copy": true, 00:08:46.739 "nvme_iov_md": false 00:08:46.739 }, 00:08:46.739 "memory_domains": [ 00:08:46.739 { 00:08:46.739 "dma_device_id": "system", 00:08:46.739 "dma_device_type": 1 00:08:46.739 }, 00:08:46.739 { 00:08:46.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.740 "dma_device_type": 2 00:08:46.740 } 00:08:46.740 ], 00:08:46.740 "driver_specific": {} 00:08:46.740 } 00:08:46.740 ]' 00:08:46.740 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:46.740 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:46.740 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:47.000 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:47.000 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:47.000 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:47.000 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:47.000 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:48.442 09:17:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:48.442 09:17:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:48.442 09:17:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:48.442 09:17:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:48.442 09:17:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:50.350 09:17:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:50.350 09:17:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:50.350 09:17:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:50.350 09:17:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:50.350 09:17:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:50.350 09:17:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:50.350 09:17:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:50.350 09:17:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:50.350 09:17:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:50.350 09:17:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:50.350 09:17:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:50.350 09:17:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:50.350 09:17:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:50.350 09:17:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:50.350 09:17:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:50.350 09:17:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:50.350 09:17:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:50.922 09:17:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:51.493 09:17:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:52.435 09:17:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:52.435 09:17:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:52.435 09:17:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:52.435 09:17:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.435 09:17:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:52.435 ************************************ 00:08:52.435 START TEST filesystem_in_capsule_ext4 00:08:52.435 ************************************ 00:08:52.435 09:17:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:52.435 09:17:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:52.435 09:17:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:52.435 09:17:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:52.435 09:17:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:52.435 09:17:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:52.435 09:17:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:52.435 09:17:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:52.435 09:17:39 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:52.435 09:17:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:52.435 09:17:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:52.435 mke2fs 1.46.5 (30-Dec-2021) 00:08:52.435 Discarding device blocks: 0/522240 done 00:08:52.695 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:52.695 Filesystem UUID: ca0af319-cb9a-4237-a852-cf20ace2a8c9 00:08:52.695 Superblock backups stored on blocks: 00:08:52.695 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:52.695 00:08:52.695 Allocating group tables: 0/64 done 00:08:52.695 Writing inode tables: 0/64 done 00:08:55.998 Creating journal (8192 blocks): done 00:08:55.998 Writing superblocks and filesystem accounting information: 0/64 done 00:08:55.998 00:08:55.998 09:17:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:55.998 09:17:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:56.258 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:56.258 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:56.258 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:56.258 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:56.258 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:56.258 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:56.258 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 507141 00:08:56.258 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:56.258 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:56.258 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:56.258 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:56.258 00:08:56.258 real 0m3.875s 00:08:56.258 user 0m0.023s 00:08:56.258 sys 0m0.053s 00:08:56.258 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:56.258 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:56.258 ************************************ 00:08:56.258 END TEST filesystem_in_capsule_ext4 00:08:56.258 ************************************ 00:08:56.519 
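Each filesystem_in_capsule_* subtest above and below runs the same round trip against the partition created earlier; a condensed, illustrative reconstruction follows (device path, mount point and force flags are as in the trace, the helper name is made up here and is not the original filesystem.sh):

  # Illustrative reconstruction of the per-filesystem check.
  check_fs() {
      local fstype=$1 part=/dev/nvme0n1p1 mnt=/mnt/device
      if [ "$fstype" = ext4 ]; then
          mkfs.ext4 -F "$part"            # ext4 forces with -F
      else
          "mkfs.$fstype" -f "$part"       # btrfs and xfs force with -f
      fi
      mount "$part" "$mnt"
      touch "$mnt/aaa" && sync            # prove the mount is writable
      rm "$mnt/aaa" && sync
      umount "$mnt"
      lsblk -l -o NAME | grep -q -w "${part##*/}"   # partition must still be visible afterwards
  }
  check_fs ext4    # the btrfs and xfs passes that follow differ only in $fstype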
09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:56.519 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:56.519 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:56.519 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.519 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:56.519 ************************************ 00:08:56.519 START TEST filesystem_in_capsule_btrfs 00:08:56.519 ************************************ 00:08:56.519 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:56.519 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:56.519 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:56.519 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:56.519 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:56.519 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:56.519 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:56.519 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:56.519 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:56.519 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:56.519 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:56.519 btrfs-progs v6.6.2 00:08:56.519 See https://btrfs.readthedocs.io for more information. 00:08:56.519 00:08:56.519 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:56.519 NOTE: several default settings have changed in version 5.15, please make sure 00:08:56.519 this does not affect your deployments: 00:08:56.519 - DUP for metadata (-m dup) 00:08:56.519 - enabled no-holes (-O no-holes) 00:08:56.519 - enabled free-space-tree (-R free-space-tree) 00:08:56.519 00:08:56.519 Label: (null) 00:08:56.519 UUID: 617ff6d9-b970-4522-9777-6f9a568b4d83 00:08:56.519 Node size: 16384 00:08:56.519 Sector size: 4096 00:08:56.519 Filesystem size: 510.00MiB 00:08:56.519 Block group profiles: 00:08:56.519 Data: single 8.00MiB 00:08:56.519 Metadata: DUP 32.00MiB 00:08:56.519 System: DUP 8.00MiB 00:08:56.519 SSD detected: yes 00:08:56.519 Zoned device: no 00:08:56.519 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:56.519 Runtime features: free-space-tree 00:08:56.519 Checksum: crc32c 00:08:56.519 Number of devices: 1 00:08:56.519 Devices: 00:08:56.519 ID SIZE PATH 00:08:56.519 1 510.00MiB /dev/nvme0n1p1 00:08:56.519 00:08:56.519 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:56.519 09:17:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:57.462 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:57.462 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 507141 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:57.463 00:08:57.463 real 0m1.006s 00:08:57.463 user 0m0.019s 00:08:57.463 sys 0m0.069s 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:57.463 ************************************ 00:08:57.463 END TEST filesystem_in_capsule_btrfs 00:08:57.463 ************************************ 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:57.463 ************************************ 00:08:57.463 START TEST filesystem_in_capsule_xfs 00:08:57.463 ************************************ 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:57.463 09:17:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:57.723 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:57.724 = sectsz=512 attr=2, projid32bit=1 00:08:57.724 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:57.724 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:57.724 data = bsize=4096 blocks=130560, imaxpct=25 00:08:57.724 = sunit=0 swidth=0 blks 00:08:57.724 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:57.724 log =internal log bsize=4096 blocks=16384, version=2 00:08:57.724 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:57.724 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:58.295 Discarding blocks...Done. 
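A quick arithmetic cross-check of the geometry mkfs.xfs printed above: the data section (130560 blocks of 4096 bytes) works out to exactly the 510 MiB partition carved out earlier, with the remaining ~2 MiB of the 512 MiB namespace presumably consumed by GPT metadata and partition alignment:

  echo $(( 130560 * 4096 ))       # 534773760 bytes
  echo $(( 510 * 1024 * 1024 ))   # 534773760 bytes -> 510.00MiB, matching the partition size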
00:08:58.295 09:17:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:58.295 09:17:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:00.209 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:00.470 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:00.470 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:00.470 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:00.470 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:00.470 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:00.470 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 507141 00:09:00.470 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:00.470 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:00.470 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:00.470 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:00.470 00:09:00.470 real 0m2.877s 00:09:00.470 user 0m0.035s 00:09:00.470 sys 0m0.044s 00:09:00.470 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:00.470 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:00.470 ************************************ 00:09:00.470 END TEST filesystem_in_capsule_xfs 00:09:00.470 ************************************ 00:09:00.470 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:00.470 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:00.470 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:00.470 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:00.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.732 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:00.732 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:00.732 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:00.732 09:17:47 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:00.732 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:00.732 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:00.732 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:00.732 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:00.732 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.732 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:00.732 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.732 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:00.732 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 507141 00:09:00.732 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 507141 ']' 00:09:00.732 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 507141 00:09:00.732 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:09:00.732 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:00.732 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 507141 00:09:00.732 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:00.732 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:00.732 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 507141' 00:09:00.732 killing process with pid 507141 00:09:00.732 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 507141 00:09:00.732 09:17:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 507141 00:09:00.993 09:17:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:00.993 00:09:00.993 real 0m15.250s 00:09:00.993 user 1m0.182s 00:09:00.993 sys 0m1.092s 00:09:00.993 09:17:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:00.993 09:17:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:00.993 ************************************ 00:09:00.993 END TEST nvmf_filesystem_in_capsule 00:09:00.993 ************************************ 00:09:00.993 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:09:00.993 09:17:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:00.993 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:09:00.993 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:09:00.993 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:00.993 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:09:00.993 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:00.993 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:00.993 rmmod nvme_tcp 00:09:00.993 rmmod nvme_fabrics 00:09:00.993 rmmod nvme_keyring 00:09:00.993 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:00.993 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:09:00.993 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:09:00.993 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:00.993 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:00.993 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:00.993 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:00.993 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:00.993 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:00.993 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.993 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.993 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.539 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:03.539 00:09:03.539 real 0m40.291s 00:09:03.539 user 1m59.422s 00:09:03.539 sys 0m8.237s 00:09:03.539 09:17:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:03.539 09:17:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:03.539 ************************************ 00:09:03.539 END TEST nvmf_filesystem 00:09:03.539 ************************************ 00:09:03.539 09:17:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:03.539 09:17:50 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:03.539 09:17:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:03.539 09:17:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.539 09:17:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:03.539 ************************************ 00:09:03.539 START TEST nvmf_target_discovery 00:09:03.539 ************************************ 00:09:03.539 09:17:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:03.539 * Looking for test storage... 
00:09:03.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.539 09:17:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:03.539 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:03.539 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.539 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.539 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.539 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.539 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.539 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.539 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.539 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.539 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.539 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.539 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:03.539 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:03.539 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.539 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.539 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:03.539 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.539 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.539 09:17:50 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.539 09:17:50 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.539 09:17:50 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:09:03.540 09:17:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:11.684 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:11.684 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:09:11.684 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:11.684 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:11.684 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:11.684 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:11.684 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:11.684 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:09:11.684 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:11.684 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:09:11.684 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:09:11.684 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:09:11.684 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:09:11.684 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:09:11.684 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:09:11.684 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:11.684 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:11.684 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:11.684 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:11.684 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:11.685 09:17:58 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:11.685 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:11.685 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:11.685 Found net devices under 0000:31:00.0: cvl_0_0 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:11.685 Found net devices under 0000:31:00.1: cvl_0_1 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:11.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:11.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.747 ms 00:09:11.685 00:09:11.685 --- 10.0.0.2 ping statistics --- 00:09:11.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.685 rtt min/avg/max/mdev = 0.747/0.747/0.747/0.000 ms 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:11.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:11.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:09:11.685 00:09:11.685 --- 10.0.0.1 ping statistics --- 00:09:11.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.685 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=515016 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 515016 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 515016 ']' 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:09:11.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:11.685 09:17:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:11.685 [2024-07-15 09:17:58.560835] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:09:11.685 [2024-07-15 09:17:58.560892] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.685 EAL: No free 2048 kB hugepages reported on node 1 00:09:11.685 [2024-07-15 09:17:58.636360] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:11.685 [2024-07-15 09:17:58.702120] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:11.685 [2024-07-15 09:17:58.702157] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:11.685 [2024-07-15 09:17:58.702165] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:11.685 [2024-07-15 09:17:58.702171] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:11.685 [2024-07-15 09:17:58.702177] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:11.685 [2024-07-15 09:17:58.702238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.685 [2024-07-15 09:17:58.702349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:11.686 [2024-07-15 09:17:58.702524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.686 [2024-07-15 09:17:58.702525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.257 [2024-07-15 09:17:59.364300] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
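The RPC sequence being traced here (null bdev, subsystem, namespace and listener, repeated for cnode1 through cnode4, then a discovery listener and a referral on port 4430) can be reproduced against a running nvmf_tgt roughly as follows; scripts/rpc.py is assumed to be the SPDK tree's RPC client that the rpc_cmd wrapper above invokes:

  rpc=scripts/rpc.py                     # assumed path inside the SPDK checkout
  $rpc nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2 3 4; do
      $rpc bdev_null_create "Null$i" 102400 512                        # size/block size as in the trace
      $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
           -a -s "SPDK0000000000000$i"                                 # allow any host, fixed serial
      $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
      $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # expose the discovery service
  $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430             # reported as discovery log entry 5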
00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.257 Null1 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.257 [2024-07-15 09:17:59.422099] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.257 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:12.258 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:12.258 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.258 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.258 Null2 00:09:12.258 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.258 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:12.258 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.258 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.258 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.258 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:12.258 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.258 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:12.519 09:17:59 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.519 Null3 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.519 Null4 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.519 09:17:59 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.519 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 4420 00:09:12.780 00:09:12.780 Discovery Log Number of Records 6, Generation counter 6 00:09:12.780 =====Discovery Log Entry 0====== 00:09:12.780 trtype: tcp 00:09:12.780 adrfam: ipv4 00:09:12.780 subtype: current discovery subsystem 00:09:12.780 treq: not required 00:09:12.780 portid: 0 00:09:12.780 trsvcid: 4420 00:09:12.780 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:12.780 traddr: 10.0.0.2 00:09:12.780 eflags: explicit discovery connections, duplicate discovery information 00:09:12.780 sectype: none 00:09:12.780 =====Discovery Log Entry 1====== 00:09:12.780 trtype: tcp 00:09:12.780 adrfam: ipv4 00:09:12.780 subtype: nvme subsystem 00:09:12.780 treq: not required 00:09:12.780 portid: 0 00:09:12.780 trsvcid: 4420 00:09:12.780 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:12.780 traddr: 10.0.0.2 00:09:12.780 eflags: none 00:09:12.780 sectype: none 00:09:12.780 =====Discovery Log Entry 2====== 00:09:12.780 trtype: tcp 00:09:12.780 adrfam: ipv4 00:09:12.780 subtype: nvme subsystem 00:09:12.780 treq: not required 00:09:12.780 portid: 0 00:09:12.780 trsvcid: 4420 00:09:12.780 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:12.780 traddr: 10.0.0.2 00:09:12.780 eflags: none 00:09:12.780 sectype: none 00:09:12.780 =====Discovery Log Entry 3====== 00:09:12.780 trtype: tcp 00:09:12.780 adrfam: ipv4 00:09:12.780 subtype: nvme subsystem 00:09:12.780 treq: not required 00:09:12.780 portid: 0 00:09:12.780 trsvcid: 4420 00:09:12.780 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:12.780 traddr: 10.0.0.2 00:09:12.780 eflags: none 00:09:12.780 sectype: none 00:09:12.780 =====Discovery Log Entry 4====== 00:09:12.780 trtype: tcp 00:09:12.780 adrfam: ipv4 00:09:12.780 subtype: nvme subsystem 00:09:12.780 treq: not required 
00:09:12.780 portid: 0 00:09:12.780 trsvcid: 4420 00:09:12.780 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:12.780 traddr: 10.0.0.2 00:09:12.780 eflags: none 00:09:12.780 sectype: none 00:09:12.780 =====Discovery Log Entry 5====== 00:09:12.780 trtype: tcp 00:09:12.780 adrfam: ipv4 00:09:12.780 subtype: discovery subsystem referral 00:09:12.780 treq: not required 00:09:12.780 portid: 0 00:09:12.780 trsvcid: 4430 00:09:12.780 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:12.780 traddr: 10.0.0.2 00:09:12.780 eflags: none 00:09:12.780 sectype: none 00:09:12.780 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:12.780 Perform nvmf subsystem discovery via RPC 00:09:12.780 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:12.780 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.780 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.780 [ 00:09:12.780 { 00:09:12.780 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:12.780 "subtype": "Discovery", 00:09:12.780 "listen_addresses": [ 00:09:12.780 { 00:09:12.780 "trtype": "TCP", 00:09:12.780 "adrfam": "IPv4", 00:09:12.780 "traddr": "10.0.0.2", 00:09:12.780 "trsvcid": "4420" 00:09:12.780 } 00:09:12.780 ], 00:09:12.780 "allow_any_host": true, 00:09:12.780 "hosts": [] 00:09:12.780 }, 00:09:12.780 { 00:09:12.780 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:12.780 "subtype": "NVMe", 00:09:12.780 "listen_addresses": [ 00:09:12.780 { 00:09:12.780 "trtype": "TCP", 00:09:12.780 "adrfam": "IPv4", 00:09:12.780 "traddr": "10.0.0.2", 00:09:12.780 "trsvcid": "4420" 00:09:12.780 } 00:09:12.780 ], 00:09:12.780 "allow_any_host": true, 00:09:12.780 "hosts": [], 00:09:12.780 "serial_number": "SPDK00000000000001", 00:09:12.781 "model_number": "SPDK bdev Controller", 00:09:12.781 "max_namespaces": 32, 00:09:12.781 "min_cntlid": 1, 00:09:12.781 "max_cntlid": 65519, 00:09:12.781 "namespaces": [ 00:09:12.781 { 00:09:12.781 "nsid": 1, 00:09:12.781 "bdev_name": "Null1", 00:09:12.781 "name": "Null1", 00:09:12.781 "nguid": "CDF414A0D8B84BC9AB19579D2E303247", 00:09:12.781 "uuid": "cdf414a0-d8b8-4bc9-ab19-579d2e303247" 00:09:12.781 } 00:09:12.781 ] 00:09:12.781 }, 00:09:12.781 { 00:09:12.781 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:12.781 "subtype": "NVMe", 00:09:12.781 "listen_addresses": [ 00:09:12.781 { 00:09:12.781 "trtype": "TCP", 00:09:12.781 "adrfam": "IPv4", 00:09:12.781 "traddr": "10.0.0.2", 00:09:12.781 "trsvcid": "4420" 00:09:12.781 } 00:09:12.781 ], 00:09:12.781 "allow_any_host": true, 00:09:12.781 "hosts": [], 00:09:12.781 "serial_number": "SPDK00000000000002", 00:09:12.781 "model_number": "SPDK bdev Controller", 00:09:12.781 "max_namespaces": 32, 00:09:12.781 "min_cntlid": 1, 00:09:12.781 "max_cntlid": 65519, 00:09:12.781 "namespaces": [ 00:09:12.781 { 00:09:12.781 "nsid": 1, 00:09:12.781 "bdev_name": "Null2", 00:09:12.781 "name": "Null2", 00:09:12.781 "nguid": "BED08A177BB6428487EB23C0A153AE96", 00:09:12.781 "uuid": "bed08a17-7bb6-4284-87eb-23c0a153ae96" 00:09:12.781 } 00:09:12.781 ] 00:09:12.781 }, 00:09:12.781 { 00:09:12.781 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:12.781 "subtype": "NVMe", 00:09:12.781 "listen_addresses": [ 00:09:12.781 { 00:09:12.781 "trtype": "TCP", 00:09:12.781 "adrfam": "IPv4", 00:09:12.781 "traddr": "10.0.0.2", 00:09:12.781 "trsvcid": "4420" 00:09:12.781 } 00:09:12.781 ], 00:09:12.781 "allow_any_host": true, 
00:09:12.781 "hosts": [], 00:09:12.781 "serial_number": "SPDK00000000000003", 00:09:12.781 "model_number": "SPDK bdev Controller", 00:09:12.781 "max_namespaces": 32, 00:09:12.781 "min_cntlid": 1, 00:09:12.781 "max_cntlid": 65519, 00:09:12.781 "namespaces": [ 00:09:12.781 { 00:09:12.781 "nsid": 1, 00:09:12.781 "bdev_name": "Null3", 00:09:12.781 "name": "Null3", 00:09:12.781 "nguid": "EFA8548C145F4E5B96ED9F585B17D8C8", 00:09:12.781 "uuid": "efa8548c-145f-4e5b-96ed-9f585b17d8c8" 00:09:12.781 } 00:09:12.781 ] 00:09:12.781 }, 00:09:12.781 { 00:09:12.781 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:12.781 "subtype": "NVMe", 00:09:12.781 "listen_addresses": [ 00:09:12.781 { 00:09:12.781 "trtype": "TCP", 00:09:12.781 "adrfam": "IPv4", 00:09:12.781 "traddr": "10.0.0.2", 00:09:12.781 "trsvcid": "4420" 00:09:12.781 } 00:09:12.781 ], 00:09:12.781 "allow_any_host": true, 00:09:12.781 "hosts": [], 00:09:12.781 "serial_number": "SPDK00000000000004", 00:09:12.781 "model_number": "SPDK bdev Controller", 00:09:12.781 "max_namespaces": 32, 00:09:12.781 "min_cntlid": 1, 00:09:12.781 "max_cntlid": 65519, 00:09:12.781 "namespaces": [ 00:09:12.781 { 00:09:12.781 "nsid": 1, 00:09:12.781 "bdev_name": "Null4", 00:09:12.781 "name": "Null4", 00:09:12.781 "nguid": "B13EEA997B5C4B6B9DA2512C6436ADFF", 00:09:12.781 "uuid": "b13eea99-7b5c-4b6b-9da2-512c6436adff" 00:09:12.781 } 00:09:12.781 ] 00:09:12.781 } 00:09:12.781 ] 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:12.781 09:17:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:12.781 rmmod nvme_tcp 00:09:13.055 rmmod nvme_fabrics 00:09:13.055 rmmod nvme_keyring 00:09:13.055 09:18:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:13.055 09:18:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:13.055 09:18:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:13.055 09:18:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 515016 ']' 00:09:13.055 09:18:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 515016 00:09:13.055 09:18:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 515016 ']' 00:09:13.055 09:18:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 515016 00:09:13.055 09:18:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:09:13.055 09:18:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:13.055 09:18:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 515016 00:09:13.055 09:18:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:13.055 09:18:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:13.055 09:18:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 515016' 00:09:13.055 killing process with pid 515016 00:09:13.055 09:18:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 515016 00:09:13.055 09:18:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 515016 00:09:13.055 09:18:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:13.055 09:18:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:13.055 09:18:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:13.055 09:18:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:13.055 09:18:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:13.055 09:18:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.055 09:18:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:13.055 09:18:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.595 09:18:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:15.595 00:09:15.595 real 0m11.957s 00:09:15.595 user 0m8.511s 00:09:15.595 sys 0m6.198s 00:09:15.595 09:18:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:15.595 09:18:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:15.595 ************************************ 00:09:15.595 END TEST nvmf_target_discovery 00:09:15.595 ************************************ 00:09:15.595 09:18:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # 
return 0 00:09:15.595 09:18:02 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:15.595 09:18:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:15.595 09:18:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:15.595 09:18:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:15.595 ************************************ 00:09:15.595 START TEST nvmf_referrals 00:09:15.595 ************************************ 00:09:15.595 09:18:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:15.595 * Looking for test storage... 00:09:15.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:15.595 09:18:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:15.595 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:15.595 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.595 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.595 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.595 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.595 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.595 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.595 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.595 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.595 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.595 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.595 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:15.595 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:15.595 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.595 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.595 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
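referrals.sh parameterizes the test with three referral addresses, 127.0.0.2 through 127.0.0.4, on the referral port defined just below (4430); the registration it performs a little further down in this trace can be sketched as follows, again assuming scripts/rpc.py against the already-running target (flags copied from the trace, not re-derived).

  #!/usr/bin/env bash
  # Sketch of the referral registration exercised below (reconstruction, not the real referrals.sh).
  RPC="./scripts/rpc.py"

  "$RPC" nvmf_create_transport -t tcp -o -u 8192          # same transport options the trace issues
  "$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 8009
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      "$RPC" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  "$RPC" nvmf_discovery_get_referrals | jq length          # the trace expects 3 here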
00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:15.596 09:18:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:23.824 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:23.824 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:23.824 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.825 09:18:10 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:23.825 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:23.825 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:23.825 09:18:10 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:23.825 Found net devices under 0000:31:00.0: cvl_0_0 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:23.825 Found net devices under 0000:31:00.1: cvl_0_1 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:23.825 09:18:10 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:23.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:23.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.739 ms 00:09:23.825 00:09:23.825 --- 10.0.0.2 ping statistics --- 00:09:23.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.825 rtt min/avg/max/mdev = 0.739/0.739/0.739/0.000 ms 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:23.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:23.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:09:23.825 00:09:23.825 --- 10.0.0.1 ping statistics --- 00:09:23.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.825 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=520579 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 520579 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 520579 ']' 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
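The nvmf_tcp_init steps above wire the two E810 ports back-to-back: cvl_0_0 becomes the target interface inside its own network namespace (cvl_0_0_ns_spdk) while cvl_0_1 stays in the root namespace as the initiator side, which is why both ping directions are checked. A condensed sketch of that topology setup, using the same names as the trace:

  #!/usr/bin/env bash
  # Condensed reconstruction of the nvmf_tcp_init sequence shown above.
  set -e
  TGT_IF=cvl_0_0; INIT_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INIT_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                          # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev "$INIT_IF"                     # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address
  ip link set "$INIT_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                         # root ns -> target ns
  ip netns exec "$NS" ping -c 1 10.0.0.1                     # target ns -> root ns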
00:09:23.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:23.825 09:18:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:23.825 [2024-07-15 09:18:10.663217] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:09:23.825 [2024-07-15 09:18:10.663278] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.825 EAL: No free 2048 kB hugepages reported on node 1 00:09:23.825 [2024-07-15 09:18:10.737366] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:23.826 [2024-07-15 09:18:10.805093] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.826 [2024-07-15 09:18:10.805127] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.826 [2024-07-15 09:18:10.805134] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.826 [2024-07-15 09:18:10.805141] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.826 [2024-07-15 09:18:10.805146] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:23.826 [2024-07-15 09:18:10.805298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.826 [2024-07-15 09:18:10.805424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.826 [2024-07-15 09:18:10.805578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.826 [2024-07-15 09:18:10.805579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:24.397 [2024-07-15 09:18:11.465416] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:24.397 [2024-07-15 09:18:11.481604] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:24.397 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.658 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:24.658 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:24.658 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:24.658 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:24.658 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:24.658 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:24.658 09:18:11 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:24.658 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:24.658 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:24.659 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:24.659 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:24.659 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.659 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:24.659 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.659 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:24.659 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.659 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:24.659 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.659 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:24.659 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.659 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:24.659 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.920 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:24.920 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:24.920 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.920 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:24.920 09:18:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.920 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:24.920 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:24.920 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:24.920 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:24.920 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:24.920 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:24.920 09:18:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 
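The get_referral_ips checks running here compare the target's own view of its referrals (over RPC) with what an initiator sees in the discovery log. A sketch of that comparison, with the host NQN/ID left as placeholders (the test derives them from nvme gen-hostnqn):

  #!/usr/bin/env bash
  # Sketch of the rpc-vs-nvme referral comparison used above (host identity values are placeholders).
  RPC="./scripts/rpc.py"
  HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:REPLACE-ME"       # placeholder
  HOSTID="REPLACE-ME"                                        # placeholder

  rpc_view=$("$RPC" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
  nvme_view=$(nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
                  -t tcp -a 10.0.0.2 -s 8009 -o json |
              jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)

  [[ "$rpc_view" == "$nvme_view" ]] && echo "referral views agree: $rpc_view"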
00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:24.920 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:25.182 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:25.182 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:25.182 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:25.182 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:25.182 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:25.182 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:25.182 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:25.182 09:18:12 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:25.182 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:25.182 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:25.182 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:25.182 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:25.182 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:25.443 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals 
-- target/referrals.sh@26 -- # echo 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:25.704 09:18:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:25.704 rmmod nvme_tcp 00:09:25.965 rmmod nvme_fabrics 00:09:25.965 rmmod nvme_keyring 00:09:25.965 09:18:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:25.965 09:18:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:25.965 09:18:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:25.965 09:18:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 520579 ']' 00:09:25.965 09:18:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 520579 00:09:25.965 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 520579 ']' 00:09:25.965 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 520579 00:09:25.965 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:09:25.965 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:25.965 09:18:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 520579 00:09:25.965 09:18:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:25.965 09:18:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:25.965 09:18:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 520579' 00:09:25.965 killing process with pid 520579 00:09:25.965 09:18:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 520579 00:09:25.965 09:18:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 520579 00:09:25.965 09:18:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:25.965 09:18:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:25.965 09:18:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:25.965 09:18:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:25.965 09:18:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:25.965 09:18:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.965 09:18:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:25.965 09:18:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.511 09:18:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:28.511 00:09:28.511 real 0m12.859s 00:09:28.511 user 0m12.476s 00:09:28.511 sys 0m6.525s 00:09:28.511 09:18:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 
-- # xtrace_disable 00:09:28.511 09:18:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:28.511 ************************************ 00:09:28.511 END TEST nvmf_referrals 00:09:28.511 ************************************ 00:09:28.511 09:18:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:28.511 09:18:15 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:28.511 09:18:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:28.511 09:18:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.511 09:18:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:28.511 ************************************ 00:09:28.511 START TEST nvmf_connect_disconnect 00:09:28.511 ************************************ 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:28.511 * Looking for test storage... 00:09:28.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 
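The host identity used on the initiator side comes straight from nvme-cli, as traced above. Stripped of the xtrace noise, the assignments amount to roughly the following (the parameter expansion is an assumption; the trace only shows the resulting values):

    NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:801c19ac-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}             # assumed: the UUID portion of the host NQN
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")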
-- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:28.511 
09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:28.511 09:18:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:36.670 09:18:23 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:36.670 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:36.670 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:36.670 Found net devices under 0000:31:00.0: cvl_0_0 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:36.670 Found net devices under 0000:31:00.1: cvl_0_1 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:36.670 09:18:23 
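The NIC discovery above works by matching each PCI function's vendor:device ID against the e810/x722/mlx tables and then reading the bound net device name out of sysfs. A minimal sketch of that per-device lookup, using the two ice-bound E810 ports (0x8086:0x159b) found in this run; the loop is a simplification of what gather_supported_nvmf_pci_devs traces:

    for pci in 0000:31:00.0 0000:31:00.1; do
        # the kernel exposes the netdev name under the PCI device's sysfs node
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"   # -> cvl_0_0, cvl_0_1
    done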
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:36.670 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:36.671 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:36.671 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:36.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:36.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.726 ms 00:09:36.671 00:09:36.671 --- 10.0.0.2 ping statistics --- 00:09:36.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.671 rtt min/avg/max/mdev = 0.726/0.726/0.726/0.000 ms 00:09:36.671 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:36.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:36.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:09:36.671 00:09:36.671 --- 10.0.0.1 ping statistics --- 00:09:36.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.671 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:09:36.671 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:36.671 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:36.671 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:36.671 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:36.671 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:36.671 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:36.671 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:36.671 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:36.671 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:36.671 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:36.671 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:36.671 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:36.671 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:36.671 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=526080 00:09:36.671 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 526080 00:09:36.671 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 526080 ']' 00:09:36.671 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.671 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:36.671 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.671 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:36.671 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:36.671 09:18:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:36.671 [2024-07-15 09:18:23.723821] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
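Stripped down, the nvmf_tcp_init sequence traced above gives the target-side port its own network namespace and a point-to-point 10.0.0.0/24 link to the initiator port, then sanity-checks both directions with ping. A condensed restatement of the commands already visible in the trace:

    ip netns add cvl_0_0_ns_spdk                               # namespace for the target-side port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator keeps the second port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1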
00:09:36.671 [2024-07-15 09:18:23.723886] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.671 EAL: No free 2048 kB hugepages reported on node 1 00:09:36.671 [2024-07-15 09:18:23.801794] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:36.941 [2024-07-15 09:18:23.877092] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.941 [2024-07-15 09:18:23.877129] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.941 [2024-07-15 09:18:23.877136] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:36.941 [2024-07-15 09:18:23.877146] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:36.941 [2024-07-15 09:18:23.877152] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:36.941 [2024-07-15 09:18:23.877288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.941 [2024-07-15 09:18:23.877408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:36.941 [2024-07-15 09:18:23.877566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.941 [2024-07-15 09:18:23.877567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:37.512 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:37.512 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:09:37.512 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:37.513 [2024-07-15 09:18:24.544391] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:37.513 09:18:24 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:37.513 [2024-07-15 09:18:24.603561] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:37.513 09:18:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:41.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:55.945 rmmod nvme_tcp 00:09:55.945 rmmod nvme_fabrics 00:09:55.945 rmmod nvme_keyring 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 526080 ']' 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 526080 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- 
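The connect_disconnect test body traced above is a standard SPDK target bring-up followed by five connect/disconnect cycles from the initiator. The target-side RPCs are exactly the ones in the trace (rpc_cmd is the suite's wrapper around scripts/rpc.py, talking to the nvmf_tgt started inside the namespace); the initiator-side loop is paraphrased, since the trace only shows its "disconnected 1 controller(s)" output, so the exact nvme-cli flags are an assumption:

    # target side, via rpc_cmd
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512                           # creates Malloc0: 64 MiB, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side, roughly (num_iterations=5)
    for i in $(seq 1 5); do
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
            --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # emits the "disconnected 1 controller(s)" lines
    done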
common/autotest_common.sh@948 -- # '[' -z 526080 ']' 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 526080 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 526080 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 526080' 00:09:55.945 killing process with pid 526080 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 526080 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 526080 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:55.945 09:18:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.857 09:18:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:57.857 00:09:57.857 real 0m29.640s 00:09:57.857 user 1m18.051s 00:09:57.857 sys 0m7.088s 00:09:57.857 09:18:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:57.857 09:18:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:57.857 ************************************ 00:09:57.857 END TEST nvmf_connect_disconnect 00:09:57.857 ************************************ 00:09:57.857 09:18:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:57.857 09:18:44 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:57.858 09:18:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:57.858 09:18:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.858 09:18:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:57.858 ************************************ 00:09:57.858 START TEST nvmf_multitarget 00:09:57.858 ************************************ 00:09:57.858 09:18:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:58.118 * Looking for test storage... 
00:09:58.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:58.118 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:58.119 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:58.119 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:09:58.119 09:18:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:58.119 09:18:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.119 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:58.119 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:58.119 09:18:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:09:58.119 09:18:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:06.254 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.254 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:06.255 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:06.255 Found net devices under 0000:31:00.0: cvl_0_0 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:06.255 Found net devices under 0000:31:00.1: cvl_0_1 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:06.255 09:18:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:06.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:06.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:10:06.255 00:10:06.255 --- 10.0.0.2 ping statistics --- 00:10:06.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.255 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:06.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:10:06.255 00:10:06.255 --- 10.0.0.1 ping statistics --- 00:10:06.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.255 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=534542 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 534542 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 534542 ']' 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:06.255 09:18:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:06.255 [2024-07-15 09:18:53.176992] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:10:06.255 [2024-07-15 09:18:53.177042] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.255 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.255 [2024-07-15 09:18:53.249437] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:06.255 [2024-07-15 09:18:53.314636] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.255 [2024-07-15 09:18:53.314674] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.255 [2024-07-15 09:18:53.314682] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.255 [2024-07-15 09:18:53.314688] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.255 [2024-07-15 09:18:53.314694] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.255 [2024-07-15 09:18:53.314788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.255 [2024-07-15 09:18:53.314863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.255 [2024-07-15 09:18:53.315003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.255 [2024-07-15 09:18:53.315003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:06.826 09:18:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:06.826 09:18:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:10:06.826 09:18:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:06.826 09:18:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:06.826 09:18:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:06.826 09:18:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.826 09:18:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:06.826 09:18:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:06.826 09:18:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:07.086 09:18:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:07.086 09:18:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:07.086 "nvmf_tgt_1" 00:10:07.086 09:18:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:07.086 "nvmf_tgt_2" 00:10:07.086 09:18:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:07.086 09:18:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:07.347 09:18:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:10:07.347 09:18:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:07.347 true 00:10:07.347 09:18:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:07.608 true 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:07.608 rmmod nvme_tcp 00:10:07.608 rmmod nvme_fabrics 00:10:07.608 rmmod nvme_keyring 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 534542 ']' 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 534542 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 534542 ']' 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 534542 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 534542 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 534542' 00:10:07.608 killing process with pid 534542 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 534542 00:10:07.608 09:18:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 534542 00:10:07.870 09:18:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:07.870 09:18:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:07.870 09:18:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:07.870 09:18:54 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:07.870 09:18:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:07.870 09:18:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.870 09:18:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:07.870 09:18:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.414 09:18:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:10.414 00:10:10.414 real 0m11.980s 00:10:10.414 user 0m9.459s 00:10:10.414 sys 0m6.239s 00:10:10.414 09:18:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:10.414 09:18:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:10.414 ************************************ 00:10:10.414 END TEST nvmf_multitarget 00:10:10.414 ************************************ 00:10:10.414 09:18:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:10.414 09:18:57 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:10.414 09:18:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:10.414 09:18:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:10.414 09:18:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:10.414 ************************************ 00:10:10.414 START TEST nvmf_rpc 00:10:10.414 ************************************ 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:10.414 * Looking for test storage... 
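[editor's recap, not part of the captured trace] The nvmf_multitarget case that finished just above asserts one default target, adds two more through multitarget_rpc.py, confirms the count reaches three, then deletes them and confirms the count drops back to one. A minimal recap sketch of that sequence, using the script path from this workspace; the surrounding environment (a running nvmf_tgt serving RPCs on the default socket) is assumed, exactly as in the run above:

  # recap sketch — values copied from the trace above, not a new test
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]     # only the default target exists
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32          # prints "nvmf_tgt_1"
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32          # prints "nvmf_tgt_2"
  [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]     # default + the two new targets
  $RPC nvmf_delete_target -n nvmf_tgt_1                # prints "true"
  $RPC nvmf_delete_target -n nvmf_tgt_2                # prints "true"
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]     # back to the default target only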
00:10:10.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:10.414 09:18:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:10.415 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:10.415 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:10.415 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:10.415 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:10.415 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:10.415 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.415 09:18:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:10.415 09:18:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.415 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:10.415 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:10.415 09:18:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:10:10.415 09:18:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.553 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:18.553 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:10:18.553 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
00:10:18.553 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:18.553 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:18.553 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:18.553 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:18.553 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:10:18.553 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:18.553 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:10:18.553 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:10:18.553 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:10:18.553 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:10:18.553 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:10:18.553 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:10:18.553 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:18.554 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:18.554 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:18.554 Found net devices under 0000:31:00.0: cvl_0_0 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:18.554 Found net devices under 0000:31:00.1: cvl_0_1 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:18.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:18.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:10:18.554 00:10:18.554 --- 10.0.0.2 ping statistics --- 00:10:18.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.554 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:18.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:18.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:10:18.554 00:10:18.554 --- 10.0.0.1 ping statistics --- 00:10:18.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.554 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=539608 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 539608 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 539608 ']' 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:18.554 09:19:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.554 [2024-07-15 09:19:05.454459] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:10:18.554 [2024-07-15 09:19:05.454510] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.554 EAL: No free 2048 kB hugepages reported on node 1 00:10:18.554 [2024-07-15 09:19:05.529465] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.554 [2024-07-15 09:19:05.594953] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.554 [2024-07-15 09:19:05.594986] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:18.555 [2024-07-15 09:19:05.594993] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.555 [2024-07-15 09:19:05.595000] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.555 [2024-07-15 09:19:05.595005] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.555 [2024-07-15 09:19:05.595144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.555 [2024-07-15 09:19:05.595272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.555 [2024-07-15 09:19:05.595430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.555 [2024-07-15 09:19:05.595431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:19.128 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:19.128 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:19.128 09:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:19.128 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:19.128 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.128 09:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.128 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:19.128 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.128 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.128 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.128 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:19.128 "tick_rate": 2400000000, 00:10:19.128 "poll_groups": [ 00:10:19.128 { 00:10:19.128 "name": "nvmf_tgt_poll_group_000", 00:10:19.128 "admin_qpairs": 0, 00:10:19.128 "io_qpairs": 0, 00:10:19.128 "current_admin_qpairs": 0, 00:10:19.128 "current_io_qpairs": 0, 00:10:19.128 "pending_bdev_io": 0, 00:10:19.128 "completed_nvme_io": 0, 00:10:19.128 "transports": [] 00:10:19.128 }, 00:10:19.128 { 00:10:19.128 "name": "nvmf_tgt_poll_group_001", 00:10:19.128 "admin_qpairs": 0, 00:10:19.128 "io_qpairs": 0, 00:10:19.128 "current_admin_qpairs": 0, 00:10:19.128 "current_io_qpairs": 0, 00:10:19.128 "pending_bdev_io": 0, 00:10:19.128 "completed_nvme_io": 0, 00:10:19.128 "transports": [] 00:10:19.128 }, 00:10:19.128 { 00:10:19.128 "name": "nvmf_tgt_poll_group_002", 00:10:19.128 "admin_qpairs": 0, 00:10:19.128 "io_qpairs": 0, 00:10:19.128 "current_admin_qpairs": 0, 00:10:19.128 "current_io_qpairs": 0, 00:10:19.128 "pending_bdev_io": 0, 00:10:19.128 "completed_nvme_io": 0, 00:10:19.128 "transports": [] 00:10:19.128 }, 00:10:19.128 { 00:10:19.128 "name": "nvmf_tgt_poll_group_003", 00:10:19.128 "admin_qpairs": 0, 00:10:19.128 "io_qpairs": 0, 00:10:19.128 "current_admin_qpairs": 0, 00:10:19.128 "current_io_qpairs": 0, 00:10:19.128 "pending_bdev_io": 0, 00:10:19.128 "completed_nvme_io": 0, 00:10:19.128 "transports": [] 00:10:19.128 } 00:10:19.128 ] 00:10:19.128 }' 00:10:19.128 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:19.128 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:19.128 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:19.128 09:19:06 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:10:19.128 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.390 [2024-07-15 09:19:06.380711] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:19.390 "tick_rate": 2400000000, 00:10:19.390 "poll_groups": [ 00:10:19.390 { 00:10:19.390 "name": "nvmf_tgt_poll_group_000", 00:10:19.390 "admin_qpairs": 0, 00:10:19.390 "io_qpairs": 0, 00:10:19.390 "current_admin_qpairs": 0, 00:10:19.390 "current_io_qpairs": 0, 00:10:19.390 "pending_bdev_io": 0, 00:10:19.390 "completed_nvme_io": 0, 00:10:19.390 "transports": [ 00:10:19.390 { 00:10:19.390 "trtype": "TCP" 00:10:19.390 } 00:10:19.390 ] 00:10:19.390 }, 00:10:19.390 { 00:10:19.390 "name": "nvmf_tgt_poll_group_001", 00:10:19.390 "admin_qpairs": 0, 00:10:19.390 "io_qpairs": 0, 00:10:19.390 "current_admin_qpairs": 0, 00:10:19.390 "current_io_qpairs": 0, 00:10:19.390 "pending_bdev_io": 0, 00:10:19.390 "completed_nvme_io": 0, 00:10:19.390 "transports": [ 00:10:19.390 { 00:10:19.390 "trtype": "TCP" 00:10:19.390 } 00:10:19.390 ] 00:10:19.390 }, 00:10:19.390 { 00:10:19.390 "name": "nvmf_tgt_poll_group_002", 00:10:19.390 "admin_qpairs": 0, 00:10:19.390 "io_qpairs": 0, 00:10:19.390 "current_admin_qpairs": 0, 00:10:19.390 "current_io_qpairs": 0, 00:10:19.390 "pending_bdev_io": 0, 00:10:19.390 "completed_nvme_io": 0, 00:10:19.390 "transports": [ 00:10:19.390 { 00:10:19.390 "trtype": "TCP" 00:10:19.390 } 00:10:19.390 ] 00:10:19.390 }, 00:10:19.390 { 00:10:19.390 "name": "nvmf_tgt_poll_group_003", 00:10:19.390 "admin_qpairs": 0, 00:10:19.390 "io_qpairs": 0, 00:10:19.390 "current_admin_qpairs": 0, 00:10:19.390 "current_io_qpairs": 0, 00:10:19.390 "pending_bdev_io": 0, 00:10:19.390 "completed_nvme_io": 0, 00:10:19.390 "transports": [ 00:10:19.390 { 00:10:19.390 "trtype": "TCP" 00:10:19.390 } 00:10:19.390 ] 00:10:19.390 } 00:10:19.390 ] 00:10:19.390 }' 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.390 Malloc1 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.390 [2024-07-15 09:19:06.572371] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:19.390 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:10:19.391 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:10:19.391 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:10:19.652 [2024-07-15 09:19:06.599180] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb' 00:10:19.652 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:19.652 could not add new controller: failed to write to nvme-fabrics device 00:10:19.652 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:10:19.652 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:19.652 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:19.652 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:19.652 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:19.652 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.652 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.652 09:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.652 09:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:21.041 09:19:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:21.041 09:19:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:21.041 09:19:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:21.041 09:19:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:21.041 09:19:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:22.955 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:22.955 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:22.955 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:22.955 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:22.955 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:22.955 09:19:10 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:22.955 09:19:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:23.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:23.216 [2024-07-15 09:19:10.247124] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb' 00:10:23.216 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:23.216 could not add new controller: failed to write to nvme-fabrics device 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.216 09:19:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:24.597 09:19:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:24.597 09:19:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:24.597 09:19:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:24.597 09:19:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:24.597 09:19:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:26.509 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:26.770 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:26.770 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:26.770 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:26.770 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:26.770 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:26.770 09:19:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:26.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.770 09:19:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:26.770 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:26.770 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:26.770 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.770 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:26.770 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.770 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:26.770 09:19:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:26.770 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.770 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.770 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.770 09:19:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:26.770 09:19:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:26.770 09:19:13 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:26.770 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.770 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.770 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.770 09:19:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:26.770 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.771 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.771 [2024-07-15 09:19:13.899213] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:26.771 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.771 09:19:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:26.771 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.771 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.771 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.771 09:19:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:26.771 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.771 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.771 09:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.771 09:19:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:28.273 09:19:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:28.273 09:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:28.273 09:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:28.273 09:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:28.273 09:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:30.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:30.817 [2024-07-15 09:19:17.594638] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.817 09:19:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:32.203 09:19:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:32.203 09:19:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:10:32.203 09:19:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:32.203 09:19:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:32.203 09:19:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:34.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:34.117 [2024-07-15 09:19:21.259359] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.117 09:19:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:36.028 09:19:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:36.028 09:19:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:36.028 09:19:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:36.028 09:19:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:36.029 09:19:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:37.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.939 [2024-07-15 09:19:24.956083] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.939 09:19:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:39.323 09:19:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:39.323 09:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:39.323 09:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:39.323 09:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:39.324 09:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:41.238 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:41.238 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:41.238 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:41.499 
09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:41.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.499 [2024-07-15 09:19:28.615999] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.499 09:19:28 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.499 09:19:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:43.412 09:19:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:43.412 09:19:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:43.412 09:19:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:43.412 09:19:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:43.412 09:19:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:45.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.327 [2024-07-15 09:19:32.368608] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.327 [2024-07-15 09:19:32.428730] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.327 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.328 [2024-07-15 09:19:32.488919] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
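The trace above is a second pass over the same workflow: target/rpc.sh loops, and on every iteration creates a subsystem with the fixed serial SPDKISFASTANDAWESOME, exposes it on a TCP listener, attaches the Malloc1 namespace, opens it to any host, connects from the initiator, waits for the serial to show up under lsblk, and then tears everything down again. Below is a condensed sketch of one such iteration, not the verbatim test script; the rpc.py path, NQNs, serial, address and port are taken from the log, while the retry loop is a simplified stand-in for the waitforserial / waitforserial_disconnect helpers.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # target side: create the subsystem and expose it over NVMe/TCP
  $rpc nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns $nqn Malloc1 -n 5
  $rpc nvmf_subsystem_allow_any_host $nqn

  # initiator side: connect and wait for a block device with the expected serial
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
               --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb \
               -t tcp -n $nqn -a 10.0.0.2 -s 4420
  until [[ $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) -ge 1 ]]; do sleep 2; done

  # teardown: disconnect, drop the namespace, delete the subsystem
  nvme disconnect -n $nqn
  $rpc nvmf_subsystem_remove_ns $nqn 5
  $rpc nvmf_delete_subsystem $nqn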
00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.328 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.589 [2024-07-15 09:19:32.545087] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
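Once the loops finish, rpc.sh asks the target for its poll-group statistics and sanity-checks them: the nvmf_get_stats JSON dumped below is fed through the jsum helper, where jq extracts one counter per poll group and awk adds them up, and the test only requires the totals (7 admin qpairs and 889 I/O qpairs in this run) to be positive. A minimal reconstruction of that helper, inferred from the jq and awk invocations visible in the trace rather than copied from rpc.sh:

  # jsum FILTER - sum one numeric field across all poll groups in $stats
  jsum() {
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }

  stats=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_stats)
  (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # at least one admin qpair was created
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # at least one I/O qpair was created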
00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.589 [2024-07-15 09:19:32.605279] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.589 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.590 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.590 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:45.590 "tick_rate": 2400000000, 00:10:45.590 "poll_groups": [ 00:10:45.590 { 00:10:45.590 "name": "nvmf_tgt_poll_group_000", 00:10:45.590 "admin_qpairs": 0, 00:10:45.590 "io_qpairs": 224, 00:10:45.590 "current_admin_qpairs": 0, 00:10:45.590 "current_io_qpairs": 0, 00:10:45.590 "pending_bdev_io": 0, 00:10:45.590 "completed_nvme_io": 225, 00:10:45.590 "transports": [ 00:10:45.590 { 00:10:45.590 "trtype": "TCP" 00:10:45.590 } 00:10:45.590 ] 00:10:45.590 }, 00:10:45.590 { 00:10:45.590 "name": "nvmf_tgt_poll_group_001", 00:10:45.590 "admin_qpairs": 1, 00:10:45.590 "io_qpairs": 223, 00:10:45.590 "current_admin_qpairs": 0, 00:10:45.590 "current_io_qpairs": 0, 00:10:45.590 "pending_bdev_io": 0, 00:10:45.590 "completed_nvme_io": 273, 00:10:45.590 "transports": [ 00:10:45.590 { 00:10:45.590 "trtype": "TCP" 00:10:45.590 } 00:10:45.590 ] 00:10:45.590 }, 00:10:45.590 { 
00:10:45.590 "name": "nvmf_tgt_poll_group_002", 00:10:45.590 "admin_qpairs": 6, 00:10:45.590 "io_qpairs": 218, 00:10:45.590 "current_admin_qpairs": 0, 00:10:45.590 "current_io_qpairs": 0, 00:10:45.590 "pending_bdev_io": 0, 00:10:45.590 "completed_nvme_io": 467, 00:10:45.590 "transports": [ 00:10:45.590 { 00:10:45.590 "trtype": "TCP" 00:10:45.590 } 00:10:45.590 ] 00:10:45.590 }, 00:10:45.590 { 00:10:45.590 "name": "nvmf_tgt_poll_group_003", 00:10:45.590 "admin_qpairs": 0, 00:10:45.590 "io_qpairs": 224, 00:10:45.590 "current_admin_qpairs": 0, 00:10:45.590 "current_io_qpairs": 0, 00:10:45.590 "pending_bdev_io": 0, 00:10:45.590 "completed_nvme_io": 274, 00:10:45.590 "transports": [ 00:10:45.590 { 00:10:45.590 "trtype": "TCP" 00:10:45.590 } 00:10:45.590 ] 00:10:45.590 } 00:10:45.590 ] 00:10:45.590 }' 00:10:45.590 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:45.590 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:45.590 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:45.590 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:45.590 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:45.590 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:45.590 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:45.590 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:45.590 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:45.590 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:10:45.590 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:45.590 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:45.590 09:19:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:45.590 09:19:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:45.590 09:19:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:10:45.590 09:19:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:45.590 09:19:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:10:45.590 09:19:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:45.590 09:19:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:45.590 rmmod nvme_tcp 00:10:45.851 rmmod nvme_fabrics 00:10:45.851 rmmod nvme_keyring 00:10:45.851 09:19:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:45.851 09:19:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:10:45.851 09:19:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:10:45.851 09:19:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 539608 ']' 00:10:45.851 09:19:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 539608 00:10:45.851 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 539608 ']' 00:10:45.851 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 539608 00:10:45.851 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:10:45.851 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:45.851 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 539608 00:10:45.851 09:19:32 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:45.851 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:45.851 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 539608' 00:10:45.851 killing process with pid 539608 00:10:45.851 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 539608 00:10:45.851 09:19:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 539608 00:10:45.851 09:19:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:45.851 09:19:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:45.851 09:19:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:45.851 09:19:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:45.851 09:19:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:45.851 09:19:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.851 09:19:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:45.851 09:19:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.392 09:19:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:48.392 00:10:48.392 real 0m38.031s 00:10:48.392 user 1m51.959s 00:10:48.392 sys 0m7.624s 00:10:48.392 09:19:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:48.392 09:19:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.392 ************************************ 00:10:48.392 END TEST nvmf_rpc 00:10:48.392 ************************************ 00:10:48.392 09:19:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:48.392 09:19:35 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:48.392 09:19:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:48.392 09:19:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:48.392 09:19:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:48.392 ************************************ 00:10:48.392 START TEST nvmf_invalid 00:10:48.392 ************************************ 00:10:48.392 09:19:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:48.392 * Looking for test storage... 
00:10:48.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.392 09:19:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:48.392 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:48.392 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.392 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.392 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.392 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.392 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.392 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.392 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.392 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.392 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.392 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.392 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:48.392 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:48.392 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.392 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.392 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:48.392 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.392 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:48.392 09:19:35 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.392 09:19:35 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.392 09:19:35 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:10:48.393 09:19:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:56.579 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:56.580 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:56.580 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:56.580 Found net devices under 0000:31:00.0: cvl_0_0 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:56.580 Found net devices under 0000:31:00.1: cvl_0_1 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:56.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:56.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.735 ms 00:10:56.580 00:10:56.580 --- 10.0.0.2 ping statistics --- 00:10:56.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.580 rtt min/avg/max/mdev = 0.735/0.735/0.735/0.000 ms 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:56.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:56.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:10:56.580 00:10:56.580 --- 10.0.0.1 ping statistics --- 00:10:56.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.580 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=549820 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 549820 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 549820 ']' 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:56.580 09:19:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:56.580 [2024-07-15 09:19:43.586719] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
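The block above is the standard phy-mode network bring-up that these TCP suites repeat before starting the target: the first E810 port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator, TCP port 4420 is opened in iptables, both directions are pinged, and nvmf_tgt is then launched inside the namespace. A condensed sketch of that plumbing, with the interface and namespace names taken from the log and error handling omitted:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives in the namespace

  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP in

  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

  # finally, the target itself runs inside the namespace
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &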
00:10:56.580 [2024-07-15 09:19:43.586789] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.580 EAL: No free 2048 kB hugepages reported on node 1 00:10:56.580 [2024-07-15 09:19:43.664402] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.580 [2024-07-15 09:19:43.739934] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.580 [2024-07-15 09:19:43.739971] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.580 [2024-07-15 09:19:43.739979] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.580 [2024-07-15 09:19:43.739985] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.580 [2024-07-15 09:19:43.739990] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:56.580 [2024-07-15 09:19:43.740180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.580 [2024-07-15 09:19:43.740297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.580 [2024-07-15 09:19:43.740459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.580 [2024-07-15 09:19:43.740460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.152 09:19:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:57.152 09:19:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:10:57.152 09:19:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:57.152 09:19:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:57.152 09:19:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:57.413 09:19:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.413 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:57.413 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5863 00:10:57.413 [2024-07-15 09:19:44.531615] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:57.413 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:10:57.413 { 00:10:57.413 "nqn": "nqn.2016-06.io.spdk:cnode5863", 00:10:57.413 "tgt_name": "foobar", 00:10:57.413 "method": "nvmf_create_subsystem", 00:10:57.413 "req_id": 1 00:10:57.413 } 00:10:57.413 Got JSON-RPC error response 00:10:57.413 response: 00:10:57.413 { 00:10:57.413 "code": -32603, 00:10:57.413 "message": "Unable to find target foobar" 00:10:57.413 }' 00:10:57.413 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:57.413 { 00:10:57.413 "nqn": "nqn.2016-06.io.spdk:cnode5863", 00:10:57.413 "tgt_name": "foobar", 00:10:57.413 "method": "nvmf_create_subsystem", 00:10:57.413 "req_id": 1 00:10:57.413 } 00:10:57.413 Got JSON-RPC error response 00:10:57.413 response: 00:10:57.413 { 00:10:57.413 "code": -32603, 00:10:57.413 "message": "Unable to find target foobar" 00:10:57.413 } 
== *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:57.413 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:57.413 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode25016 00:10:57.673 [2024-07-15 09:19:44.708191] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25016: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:57.673 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:57.673 { 00:10:57.673 "nqn": "nqn.2016-06.io.spdk:cnode25016", 00:10:57.673 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:57.673 "method": "nvmf_create_subsystem", 00:10:57.673 "req_id": 1 00:10:57.673 } 00:10:57.673 Got JSON-RPC error response 00:10:57.673 response: 00:10:57.673 { 00:10:57.673 "code": -32602, 00:10:57.673 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:57.673 }' 00:10:57.673 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:10:57.673 { 00:10:57.673 "nqn": "nqn.2016-06.io.spdk:cnode25016", 00:10:57.673 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:57.673 "method": "nvmf_create_subsystem", 00:10:57.673 "req_id": 1 00:10:57.673 } 00:10:57.673 Got JSON-RPC error response 00:10:57.673 response: 00:10:57.673 { 00:10:57.673 "code": -32602, 00:10:57.673 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:57.673 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:57.673 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:57.673 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode3253 00:10:57.935 [2024-07-15 09:19:44.880745] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3253: invalid model number 'SPDK_Controller' 00:10:57.935 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:57.935 { 00:10:57.935 "nqn": "nqn.2016-06.io.spdk:cnode3253", 00:10:57.935 "model_number": "SPDK_Controller\u001f", 00:10:57.935 "method": "nvmf_create_subsystem", 00:10:57.935 "req_id": 1 00:10:57.935 } 00:10:57.935 Got JSON-RPC error response 00:10:57.935 response: 00:10:57.935 { 00:10:57.935 "code": -32602, 00:10:57.935 "message": "Invalid MN SPDK_Controller\u001f" 00:10:57.935 }' 00:10:57.935 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:57.935 { 00:10:57.935 "nqn": "nqn.2016-06.io.spdk:cnode3253", 00:10:57.935 "model_number": "SPDK_Controller\u001f", 00:10:57.935 "method": "nvmf_create_subsystem", 00:10:57.935 "req_id": 1 00:10:57.935 } 00:10:57.935 Got JSON-RPC error response 00:10:57.935 response: 00:10:57.935 { 00:10:57.935 "code": -32602, 00:10:57.935 "message": "Invalid MN SPDK_Controller\u001f" 00:10:57.935 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:57.935 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:57.935 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:57.935 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' 
'87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:57.935 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:57.935 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:57.935 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:57.935 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:57.936 09:19:44 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:57.936 09:19:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:57.936 09:19:45 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ r == \- ]] 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'rF:`ZJepb>G73"B9_ Vjr' 00:10:57.936 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'rF:`ZJepb>G73"B9_ Vjr' nqn.2016-06.io.spdk:cnode12011 00:10:58.198 [2024-07-15 09:19:45.217765] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12011: invalid serial number 'rF:`ZJepb>G73"B9_ Vjr' 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:10:58.198 { 00:10:58.198 "nqn": "nqn.2016-06.io.spdk:cnode12011", 00:10:58.198 "serial_number": "rF:`ZJepb>G73\"B9_ Vjr", 00:10:58.198 "method": "nvmf_create_subsystem", 00:10:58.198 "req_id": 1 00:10:58.198 } 00:10:58.198 Got JSON-RPC error response 00:10:58.198 response: 00:10:58.198 { 00:10:58.198 
"code": -32602, 00:10:58.198 "message": "Invalid SN rF:`ZJepb>G73\"B9_ Vjr" 00:10:58.198 }' 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:10:58.198 { 00:10:58.198 "nqn": "nqn.2016-06.io.spdk:cnode12011", 00:10:58.198 "serial_number": "rF:`ZJepb>G73\"B9_ Vjr", 00:10:58.198 "method": "nvmf_create_subsystem", 00:10:58.198 "req_id": 1 00:10:58.198 } 00:10:58.198 Got JSON-RPC error response 00:10:58.198 response: 00:10:58.198 { 00:10:58.198 "code": -32602, 00:10:58.198 "message": "Invalid SN rF:`ZJepb>G73\"B9_ Vjr" 00:10:58.198 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:58.198 
09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:58.198 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:10:58.199 
09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.199 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:10:58.461 09:19:45 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.461 09:19:45 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.461 
09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ _ == \- ]] 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '_AK~K)z%>:f@O"cqBu;:7@rJE'\''LNv{;:0uI^vM`B]' 00:10:58.461 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '_AK~K)z%>:f@O"cqBu;:7@rJE'\''LNv{;:0uI^vM`B]' nqn.2016-06.io.spdk:cnode5004 00:10:58.723 [2024-07-15 09:19:45.699308] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5004: invalid model number '_AK~K)z%>:f@O"cqBu;:7@rJE'LNv{;:0uI^vM`B]' 00:10:58.723 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:10:58.723 { 00:10:58.723 "nqn": "nqn.2016-06.io.spdk:cnode5004", 00:10:58.723 "model_number": "_AK~K)z%>:f@O\"cqBu;:7@rJE'\''LNv{;:0uI^vM`B]", 00:10:58.723 "method": "nvmf_create_subsystem", 00:10:58.723 "req_id": 1 00:10:58.723 } 00:10:58.723 Got JSON-RPC error response 00:10:58.723 response: 00:10:58.723 { 00:10:58.723 "code": -32602, 00:10:58.723 "message": "Invalid MN _AK~K)z%>:f@O\"cqBu;:7@rJE'\''LNv{;:0uI^vM`B]" 00:10:58.723 }' 00:10:58.723 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:10:58.723 { 00:10:58.723 "nqn": "nqn.2016-06.io.spdk:cnode5004", 00:10:58.723 "model_number": "_AK~K)z%>:f@O\"cqBu;:7@rJE'LNv{;:0uI^vM`B]", 00:10:58.723 "method": "nvmf_create_subsystem", 00:10:58.723 "req_id": 1 00:10:58.723 } 00:10:58.723 Got JSON-RPC error response 00:10:58.723 response: 00:10:58.723 { 00:10:58.723 "code": -32602, 00:10:58.723 "message": "Invalid MN _AK~K)z%>:f@O\"cqBu;:7@rJE'LNv{;:0uI^vM`B]" 00:10:58.723 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:58.723 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:10:58.723 [2024-07-15 09:19:45.867929] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:58.723 09:19:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:10:58.984 09:19:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:10:58.984 09:19:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:10:58.984 09:19:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:10:58.984 09:19:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:10:58.984 09:19:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:10:59.245 [2024-07-15 09:19:46.210404] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:10:59.245 09:19:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:10:59.245 { 00:10:59.245 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:59.245 "listen_address": { 00:10:59.245 "trtype": "tcp", 00:10:59.245 "traddr": "", 00:10:59.245 "trsvcid": "4421" 00:10:59.245 }, 00:10:59.245 "method": "nvmf_subsystem_remove_listener", 00:10:59.245 "req_id": 1 00:10:59.245 } 00:10:59.245 Got JSON-RPC error response 00:10:59.245 response: 00:10:59.245 { 00:10:59.245 "code": -32602, 00:10:59.245 "message": "Invalid parameters" 00:10:59.245 }' 00:10:59.245 09:19:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:10:59.245 { 00:10:59.245 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:59.245 "listen_address": { 00:10:59.245 "trtype": "tcp", 00:10:59.245 "traddr": "", 00:10:59.245 "trsvcid": "4421" 00:10:59.245 }, 00:10:59.245 "method": "nvmf_subsystem_remove_listener", 00:10:59.245 "req_id": 1 00:10:59.245 } 00:10:59.245 Got JSON-RPC error response 00:10:59.245 response: 00:10:59.245 { 00:10:59.245 "code": -32602, 00:10:59.245 "message": "Invalid parameters" 00:10:59.245 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:10:59.245 09:19:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17846 -i 0 00:10:59.245 [2024-07-15 09:19:46.382901] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17846: invalid cntlid range [0-65519] 00:10:59.245 09:19:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:10:59.245 { 00:10:59.245 "nqn": "nqn.2016-06.io.spdk:cnode17846", 00:10:59.245 "min_cntlid": 0, 00:10:59.245 "method": "nvmf_create_subsystem", 00:10:59.245 "req_id": 1 00:10:59.245 } 00:10:59.245 Got JSON-RPC error response 00:10:59.245 response: 00:10:59.245 { 00:10:59.245 "code": -32602, 00:10:59.245 "message": "Invalid cntlid range [0-65519]" 00:10:59.245 }' 00:10:59.245 09:19:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:10:59.245 { 00:10:59.245 "nqn": "nqn.2016-06.io.spdk:cnode17846", 00:10:59.245 "min_cntlid": 0, 00:10:59.245 "method": "nvmf_create_subsystem", 00:10:59.245 "req_id": 1 00:10:59.245 } 00:10:59.245 Got JSON-RPC error response 00:10:59.245 response: 00:10:59.245 { 00:10:59.245 "code": -32602, 00:10:59.245 "message": "Invalid cntlid range [0-65519]" 00:10:59.245 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ 
\r\a\n\g\e* ]] 00:10:59.245 09:19:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21739 -i 65520 00:10:59.507 [2024-07-15 09:19:46.555442] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21739: invalid cntlid range [65520-65519] 00:10:59.507 09:19:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:10:59.507 { 00:10:59.507 "nqn": "nqn.2016-06.io.spdk:cnode21739", 00:10:59.507 "min_cntlid": 65520, 00:10:59.507 "method": "nvmf_create_subsystem", 00:10:59.507 "req_id": 1 00:10:59.507 } 00:10:59.507 Got JSON-RPC error response 00:10:59.507 response: 00:10:59.507 { 00:10:59.507 "code": -32602, 00:10:59.507 "message": "Invalid cntlid range [65520-65519]" 00:10:59.507 }' 00:10:59.507 09:19:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:10:59.507 { 00:10:59.507 "nqn": "nqn.2016-06.io.spdk:cnode21739", 00:10:59.507 "min_cntlid": 65520, 00:10:59.507 "method": "nvmf_create_subsystem", 00:10:59.507 "req_id": 1 00:10:59.507 } 00:10:59.507 Got JSON-RPC error response 00:10:59.507 response: 00:10:59.507 { 00:10:59.507 "code": -32602, 00:10:59.507 "message": "Invalid cntlid range [65520-65519]" 00:10:59.507 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:59.507 09:19:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30373 -I 0 00:10:59.767 [2024-07-15 09:19:46.727993] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30373: invalid cntlid range [1-0] 00:10:59.767 09:19:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:10:59.767 { 00:10:59.767 "nqn": "nqn.2016-06.io.spdk:cnode30373", 00:10:59.767 "max_cntlid": 0, 00:10:59.767 "method": "nvmf_create_subsystem", 00:10:59.767 "req_id": 1 00:10:59.767 } 00:10:59.767 Got JSON-RPC error response 00:10:59.767 response: 00:10:59.767 { 00:10:59.767 "code": -32602, 00:10:59.767 "message": "Invalid cntlid range [1-0]" 00:10:59.767 }' 00:10:59.767 09:19:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:10:59.767 { 00:10:59.767 "nqn": "nqn.2016-06.io.spdk:cnode30373", 00:10:59.767 "max_cntlid": 0, 00:10:59.767 "method": "nvmf_create_subsystem", 00:10:59.767 "req_id": 1 00:10:59.767 } 00:10:59.767 Got JSON-RPC error response 00:10:59.767 response: 00:10:59.767 { 00:10:59.767 "code": -32602, 00:10:59.767 "message": "Invalid cntlid range [1-0]" 00:10:59.767 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:59.767 09:19:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12748 -I 65520 00:10:59.767 [2024-07-15 09:19:46.900577] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12748: invalid cntlid range [1-65520] 00:10:59.767 09:19:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:10:59.767 { 00:10:59.767 "nqn": "nqn.2016-06.io.spdk:cnode12748", 00:10:59.767 "max_cntlid": 65520, 00:10:59.767 "method": "nvmf_create_subsystem", 00:10:59.767 "req_id": 1 00:10:59.767 } 00:10:59.767 Got JSON-RPC error response 00:10:59.767 response: 00:10:59.767 { 00:10:59.767 "code": -32602, 00:10:59.767 "message": "Invalid cntlid range [1-65520]" 00:10:59.767 }' 00:10:59.767 09:19:46 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@80 -- # [[ request: 00:10:59.767 { 00:10:59.767 "nqn": "nqn.2016-06.io.spdk:cnode12748", 00:10:59.767 "max_cntlid": 65520, 00:10:59.767 "method": "nvmf_create_subsystem", 00:10:59.767 "req_id": 1 00:10:59.767 } 00:10:59.767 Got JSON-RPC error response 00:10:59.767 response: 00:10:59.767 { 00:10:59.767 "code": -32602, 00:10:59.767 "message": "Invalid cntlid range [1-65520]" 00:10:59.767 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:59.767 09:19:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16290 -i 6 -I 5 00:11:00.027 [2024-07-15 09:19:47.065068] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16290: invalid cntlid range [6-5] 00:11:00.027 09:19:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:11:00.027 { 00:11:00.027 "nqn": "nqn.2016-06.io.spdk:cnode16290", 00:11:00.027 "min_cntlid": 6, 00:11:00.027 "max_cntlid": 5, 00:11:00.027 "method": "nvmf_create_subsystem", 00:11:00.027 "req_id": 1 00:11:00.027 } 00:11:00.027 Got JSON-RPC error response 00:11:00.027 response: 00:11:00.027 { 00:11:00.027 "code": -32602, 00:11:00.027 "message": "Invalid cntlid range [6-5]" 00:11:00.027 }' 00:11:00.027 09:19:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:11:00.027 { 00:11:00.027 "nqn": "nqn.2016-06.io.spdk:cnode16290", 00:11:00.027 "min_cntlid": 6, 00:11:00.027 "max_cntlid": 5, 00:11:00.027 "method": "nvmf_create_subsystem", 00:11:00.027 "req_id": 1 00:11:00.027 } 00:11:00.027 Got JSON-RPC error response 00:11:00.027 response: 00:11:00.027 { 00:11:00.027 "code": -32602, 00:11:00.027 "message": "Invalid cntlid range [6-5]" 00:11:00.027 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:00.027 09:19:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:00.027 09:19:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:00.027 { 00:11:00.027 "name": "foobar", 00:11:00.027 "method": "nvmf_delete_target", 00:11:00.027 "req_id": 1 00:11:00.027 } 00:11:00.027 Got JSON-RPC error response 00:11:00.027 response: 00:11:00.027 { 00:11:00.027 "code": -32602, 00:11:00.027 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:00.027 }' 00:11:00.027 09:19:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:00.027 { 00:11:00.027 "name": "foobar", 00:11:00.027 "method": "nvmf_delete_target", 00:11:00.027 "req_id": 1 00:11:00.027 } 00:11:00.027 Got JSON-RPC error response 00:11:00.027 response: 00:11:00.027 { 00:11:00.027 "code": -32602, 00:11:00.027 "message": "The specified target doesn't exist, cannot delete it." 
00:11:00.027 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:00.027 09:19:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:00.027 09:19:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:00.027 09:19:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:00.027 09:19:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:11:00.027 09:19:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:00.027 09:19:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:11:00.027 09:19:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:00.027 09:19:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:00.027 rmmod nvme_tcp 00:11:00.027 rmmod nvme_fabrics 00:11:00.288 rmmod nvme_keyring 00:11:00.288 09:19:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:00.288 09:19:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:11:00.288 09:19:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:11:00.288 09:19:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 549820 ']' 00:11:00.288 09:19:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 549820 00:11:00.288 09:19:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 549820 ']' 00:11:00.288 09:19:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 549820 00:11:00.288 09:19:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:11:00.288 09:19:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:00.288 09:19:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 549820 00:11:00.288 09:19:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:00.288 09:19:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:00.288 09:19:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 549820' 00:11:00.288 killing process with pid 549820 00:11:00.288 09:19:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 549820 00:11:00.288 09:19:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 549820 00:11:00.288 09:19:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:00.288 09:19:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:00.288 09:19:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:00.288 09:19:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:00.288 09:19:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:00.288 09:19:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.288 09:19:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:00.288 09:19:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.835 09:19:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:02.835 00:11:02.835 real 0m14.348s 00:11:02.835 user 0m19.317s 00:11:02.835 sys 0m6.990s 00:11:02.835 09:19:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:02.835 09:19:49 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@10 -- # set +x 00:11:02.835 ************************************ 00:11:02.835 END TEST nvmf_invalid 00:11:02.835 ************************************ 00:11:02.835 09:19:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:02.835 09:19:49 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:02.835 09:19:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:02.835 09:19:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:02.835 09:19:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:02.835 ************************************ 00:11:02.835 START TEST nvmf_abort 00:11:02.835 ************************************ 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:02.835 * Looking for test storage... 00:11:02.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:02.835 09:19:49 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:11:02.835 09:19:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:10.981 
09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:10.981 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:10.981 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:10.981 Found net devices under 0000:31:00.0: cvl_0_0 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:10.981 Found net devices under 0000:31:00.1: cvl_0_1 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:10.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:10.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:11:10.981 00:11:10.981 --- 10.0.0.2 ping statistics --- 00:11:10.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.981 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:10.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:10.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:11:10.981 00:11:10.981 --- 10.0.0.1 ping statistics --- 00:11:10.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.981 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:11:10.981 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:10.984 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.984 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:10.984 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:10.984 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.984 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:10.984 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:10.984 09:19:57 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:10.984 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:10.984 09:19:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:10.984 09:19:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:10.984 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=555353 00:11:10.984 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 555353 00:11:10.984 09:19:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:10.984 09:19:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 555353 ']' 00:11:10.984 09:19:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.984 09:19:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:10.984 09:19:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.984 09:19:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:10.984 09:19:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:10.984 [2024-07-15 09:19:57.831617] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:11:10.984 [2024-07-15 09:19:57.831681] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.984 EAL: No free 2048 kB hugepages reported on node 1 00:11:10.984 [2024-07-15 09:19:57.925491] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:10.984 [2024-07-15 09:19:58.019438] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.984 [2024-07-15 09:19:58.019496] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.984 [2024-07-15 09:19:58.019504] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.984 [2024-07-15 09:19:58.019511] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.984 [2024-07-15 09:19:58.019517] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:10.984 [2024-07-15 09:19:58.019656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.984 [2024-07-15 09:19:58.019825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.984 [2024-07-15 09:19:58.019856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:11.554 [2024-07-15 09:19:58.661446] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:11.554 Malloc0 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:11.554 Delay0 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:11.554 [2024-07-15 09:19:58.742056] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.554 09:19:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:11.813 09:19:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.813 09:19:58 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:11.813 EAL: No free 2048 kB hugepages reported on node 1 00:11:11.813 [2024-07-15 09:19:58.820534] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:13.727 Initializing NVMe Controllers 00:11:13.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:13.727 controller IO queue size 128 less than required 00:11:13.727 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:13.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:13.727 Initialization complete. Launching workers. 
00:11:13.727 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 34560 00:11:13.727 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34625, failed to submit 62 00:11:13.727 success 34564, unsuccess 61, failed 0 00:11:13.727 09:20:00 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:13.727 09:20:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.727 09:20:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:13.727 09:20:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.727 09:20:00 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:13.727 09:20:00 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:11:13.727 09:20:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:13.727 09:20:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:11:13.727 09:20:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:13.727 09:20:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:11:13.727 09:20:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:13.727 09:20:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:13.727 rmmod nvme_tcp 00:11:13.727 rmmod nvme_fabrics 00:11:13.727 rmmod nvme_keyring 00:11:13.727 09:20:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:13.989 09:20:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:11:13.989 09:20:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:11:13.989 09:20:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 555353 ']' 00:11:13.989 09:20:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 555353 00:11:13.989 09:20:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 555353 ']' 00:11:13.989 09:20:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 555353 00:11:13.989 09:20:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:11:13.989 09:20:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:13.989 09:20:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 555353 00:11:13.989 09:20:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:13.989 09:20:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:13.989 09:20:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 555353' 00:11:13.989 killing process with pid 555353 00:11:13.989 09:20:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 555353 00:11:13.989 09:20:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 555353 00:11:13.989 09:20:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:13.989 09:20:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:13.989 09:20:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:13.989 09:20:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:13.989 09:20:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:13.989 09:20:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.989 09:20:01 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:13.989 09:20:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.535 09:20:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:16.535 00:11:16.535 real 0m13.555s 00:11:16.535 user 0m13.283s 00:11:16.535 sys 0m6.796s 00:11:16.535 09:20:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:16.535 09:20:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:16.535 ************************************ 00:11:16.535 END TEST nvmf_abort 00:11:16.535 ************************************ 00:11:16.535 09:20:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:16.535 09:20:03 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:16.535 09:20:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:16.535 09:20:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:16.535 09:20:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:16.535 ************************************ 00:11:16.535 START TEST nvmf_ns_hotplug_stress 00:11:16.535 ************************************ 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:16.535 * Looking for test storage... 00:11:16.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.535 09:20:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:16.535 09:20:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:16.535 09:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:24.677 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:24.677 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.677 09:20:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:24.677 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:24.678 Found net devices under 0000:31:00.0: cvl_0_0 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:24.678 Found net devices under 0000:31:00.1: cvl_0_1 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:24.678 09:20:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:24.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:24.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:11:24.678 00:11:24.678 --- 10.0.0.2 ping statistics --- 00:11:24.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.678 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:24.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:24.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:11:24.678 00:11:24.678 --- 10.0.0.1 ping statistics --- 00:11:24.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.678 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=560719 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 560719 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 560719 ']' 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:24.678 09:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.678 [2024-07-15 09:20:11.586832] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:11:24.678 [2024-07-15 09:20:11.586884] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.678 EAL: No free 2048 kB hugepages reported on node 1 00:11:24.678 [2024-07-15 09:20:11.675141] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:24.678 [2024-07-15 09:20:11.748863] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.678 [2024-07-15 09:20:11.748911] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:24.678 [2024-07-15 09:20:11.748919] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.678 [2024-07-15 09:20:11.748926] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.678 [2024-07-15 09:20:11.748932] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:24.678 [2024-07-15 09:20:11.749046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.678 [2024-07-15 09:20:11.749358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.678 [2024-07-15 09:20:11.749358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:25.250 09:20:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:25.250 09:20:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:11:25.250 09:20:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:25.250 09:20:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:25.250 09:20:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:25.250 09:20:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.250 09:20:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:11:25.250 09:20:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:25.510 [2024-07-15 09:20:12.536723] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:25.510 09:20:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:25.771 09:20:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:25.771 [2024-07-15 09:20:12.874216] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:25.771 09:20:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:26.033 09:20:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:11:26.033 Malloc0 00:11:26.292 09:20:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:26.292 Delay0 00:11:26.293 09:20:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:26.552 09:20:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:26.552 NULL1 00:11:26.552 09:20:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:26.812 09:20:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=561113 00:11:26.812 09:20:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:26.812 09:20:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:26.812 09:20:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.812 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.072 09:20:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:27.072 09:20:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:11:27.072 09:20:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:27.333 [2024-07-15 09:20:14.398547] bdev.c:5033:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:11:27.333 true 00:11:27.333 09:20:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:27.333 09:20:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.594 09:20:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:27.594 09:20:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:11:27.594 09:20:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:27.855 true 00:11:27.855 09:20:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:27.855 09:20:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.131 09:20:15 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:28.131 09:20:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:11:28.131 09:20:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:28.432 true 00:11:28.432 09:20:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:28.432 09:20:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.432 09:20:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:28.697 09:20:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:11:28.697 09:20:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:28.958 true 00:11:28.958 09:20:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:28.958 09:20:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.958 09:20:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:29.243 09:20:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:11:29.243 09:20:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:29.243 true 00:11:29.504 09:20:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:29.504 09:20:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.504 09:20:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:29.765 09:20:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:11:29.765 09:20:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:29.765 true 00:11:29.765 09:20:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:29.765 09:20:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:30.027 09:20:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:30.288 
09:20:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:11:30.288 09:20:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:30.288 true 00:11:30.288 09:20:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:30.288 09:20:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:30.550 09:20:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:30.812 09:20:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:11:30.812 09:20:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:11:30.812 true 00:11:30.812 09:20:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:30.812 09:20:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.072 09:20:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:31.334 09:20:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:11:31.334 09:20:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:11:31.334 true 00:11:31.334 09:20:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:31.334 09:20:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.594 09:20:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:31.857 09:20:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:11:31.857 09:20:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:11:31.857 true 00:11:31.857 09:20:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:31.857 09:20:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.118 09:20:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:32.118 09:20:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:11:32.118 09:20:19 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:11:32.378 true 00:11:32.378 09:20:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:32.379 09:20:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.639 09:20:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:32.639 09:20:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:11:32.639 09:20:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:11:32.899 true 00:11:32.899 09:20:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:32.899 09:20:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.160 09:20:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:33.160 09:20:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:11:33.160 09:20:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:11:33.421 true 00:11:33.421 09:20:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:33.421 09:20:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.682 09:20:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:33.682 09:20:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:11:33.682 09:20:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:11:33.943 true 00:11:33.944 09:20:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:33.944 09:20:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.204 09:20:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:34.204 09:20:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:11:34.204 09:20:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:11:34.465 true 00:11:34.465 09:20:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:34.466 09:20:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.726 09:20:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:34.726 09:20:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:11:34.726 09:20:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:11:34.988 true 00:11:34.988 09:20:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:34.988 09:20:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.988 09:20:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:35.249 09:20:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:11:35.249 09:20:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:11:35.509 true 00:11:35.509 09:20:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:35.509 09:20:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.509 09:20:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:35.770 09:20:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:11:35.770 09:20:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:11:36.029 true 00:11:36.029 09:20:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:36.029 09:20:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:36.029 09:20:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:36.288 09:20:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:11:36.288 09:20:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:36.288 true 00:11:36.548 09:20:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:36.548 09:20:23 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:36.548 09:20:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:36.809 09:20:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:11:36.809 09:20:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:36.809 true 00:11:36.809 09:20:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:36.809 09:20:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.069 09:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:37.328 09:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:11:37.328 09:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:37.328 true 00:11:37.328 09:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:37.328 09:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.587 09:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:37.847 09:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:11:37.847 09:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:37.847 true 00:11:37.847 09:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:37.847 09:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:38.108 09:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:38.367 09:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:11:38.367 09:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:38.367 true 00:11:38.367 09:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:38.367 09:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:38.627 
09:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:38.627 09:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:11:38.627 09:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:38.887 true 00:11:38.887 09:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:38.887 09:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.147 09:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:39.147 09:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:11:39.147 09:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:11:39.406 true 00:11:39.406 09:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:39.406 09:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.667 09:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:39.667 09:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:11:39.667 09:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:39.927 true 00:11:39.927 09:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:39.927 09:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.187 09:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:40.187 09:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:11:40.187 09:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:40.447 true 00:11:40.447 09:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:40.447 09:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.448 09:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:11:40.708 09:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:11:40.708 09:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:11:40.967 true 00:11:40.967 09:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:40.967 09:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.967 09:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:41.228 09:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:11:41.228 09:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:11:41.488 true 00:11:41.488 09:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:41.488 09:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.488 09:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:41.750 09:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:11:41.750 09:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:11:41.750 true 00:11:42.010 09:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:42.010 09:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.010 09:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:42.272 09:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:11:42.272 09:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:11:42.272 true 00:11:42.272 09:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:42.272 09:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.532 09:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:42.793 09:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:11:42.793 09:20:29 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:11:42.793 true 00:11:42.793 09:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:42.793 09:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:43.052 09:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:43.312 09:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:11:43.313 09:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:11:43.313 true 00:11:43.313 09:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:43.313 09:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:43.573 09:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:43.833 09:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:11:43.833 09:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:11:43.833 true 00:11:43.833 09:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:43.833 09:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.094 09:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:44.354 09:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:11:44.354 09:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:11:44.354 true 00:11:44.354 09:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:44.354 09:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.614 09:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:44.614 09:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:11:44.614 09:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 
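The entries above are successive iterations of the single-namespace hotplug loop in target/ns_hotplug_stress.sh: on each pass the script checks with kill -0 that the background I/O workload (PID 561113 in this run) is still alive, detaches namespace 1 from nqn.2016-06.io.spdk:cnode1, re-attaches the Delay0 bdev, bumps null_size, and grows the NULL1 null bdev with bdev_null_resize. A minimal bash sketch of that loop, reconstructed only from the rpc.py calls visible in this trace (the loop structure and the rpc/perf_pid variable names are assumptions, not the script's actual source):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  null_size=1000
  while kill -0 "$perf_pid"; do                                      # perf_pid: the background I/O workload (561113 here); assumed name
      "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # hot-remove namespace 1
      "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # re-attach the Delay0 bdev as a namespace
      null_size=$((null_size + 1))
      "$rpc" bdev_null_resize NULL1 "$null_size"                     # grow the NULL1 bdev while I/O keeps running
  done

The bare "true" lines that follow each bdev_null_resize call appear to be rpc.py printing the RPC's boolean result.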
00:11:44.874 true 00:11:44.874 09:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:44.874 09:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:45.135 09:20:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:45.135 09:20:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:11:45.135 09:20:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:11:45.395 true 00:11:45.395 09:20:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:45.395 09:20:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:45.656 09:20:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:45.656 09:20:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:11:45.656 09:20:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:11:45.916 true 00:11:45.916 09:20:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:45.916 09:20:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.185 09:20:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:46.185 09:20:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:11:46.185 09:20:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:11:46.450 true 00:11:46.450 09:20:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:46.450 09:20:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.711 09:20:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:46.711 09:20:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:11:46.711 09:20:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:11:46.972 true 00:11:46.972 09:20:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:46.972 09:20:33 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.972 09:20:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:47.232 09:20:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:11:47.232 09:20:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:11:47.493 true 00:11:47.493 09:20:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:47.493 09:20:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.493 09:20:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:47.754 09:20:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:11:47.754 09:20:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:11:48.014 true 00:11:48.014 09:20:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:48.014 09:20:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:48.014 09:20:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:48.275 09:20:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:11:48.275 09:20:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:11:48.536 true 00:11:48.536 09:20:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:48.536 09:20:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:48.536 09:20:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:48.795 09:20:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:11:48.795 09:20:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:11:48.795 true 00:11:49.055 09:20:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:49.055 09:20:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:11:49.055 09:20:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:49.316 09:20:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:11:49.316 09:20:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:11:49.316 true 00:11:49.316 09:20:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:49.316 09:20:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.621 09:20:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:49.885 09:20:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:11:49.885 09:20:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:11:49.885 true 00:11:49.885 09:20:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:49.885 09:20:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.145 09:20:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:50.407 09:20:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:11:50.407 09:20:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:11:50.407 true 00:11:50.407 09:20:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:50.407 09:20:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.668 09:20:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:50.668 09:20:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:11:50.668 09:20:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:11:50.929 true 00:11:50.929 09:20:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:50.929 09:20:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:51.190 09:20:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:51.190 09:20:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:11:51.190 09:20:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:11:51.450 true 00:11:51.450 09:20:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:51.450 09:20:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:51.711 09:20:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:51.711 09:20:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:11:51.711 09:20:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:11:51.972 true 00:11:51.973 09:20:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:51.973 09:20:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:52.233 09:20:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:52.233 09:20:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:11:52.233 09:20:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:11:52.495 true 00:11:52.495 09:20:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:52.495 09:20:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:52.756 09:20:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:52.756 09:20:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:11:52.756 09:20:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:11:53.018 true 00:11:53.018 09:20:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:53.018 09:20:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.018 09:20:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:53.278 09:20:40 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:11:53.279 09:20:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:11:53.539 true 00:11:53.539 09:20:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:53.539 09:20:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.539 09:20:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:53.800 09:20:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:11:53.800 09:20:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:11:54.067 true 00:11:54.067 09:20:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:54.067 09:20:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.067 09:20:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:54.329 09:20:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:11:54.329 09:20:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:11:54.589 true 00:11:54.589 09:20:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:54.589 09:20:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.589 09:20:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:54.850 09:20:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:11:54.850 09:20:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:11:54.850 true 00:11:55.111 09:20:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:55.111 09:20:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:55.111 09:20:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:55.373 09:20:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:11:55.373 09:20:42 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:11:55.373 true 00:11:55.373 09:20:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:55.373 09:20:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:55.634 09:20:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:55.895 09:20:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:11:55.895 09:20:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:11:55.895 true 00:11:55.895 09:20:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:55.895 09:20:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.157 09:20:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:56.418 09:20:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1059 00:11:56.418 09:20:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:11:56.418 true 00:11:56.418 09:20:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:56.418 09:20:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.679 09:20:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:56.940 09:20:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1060 00:11:56.940 09:20:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:11:56.940 true 00:11:56.940 09:20:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:56.940 09:20:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.940 Initializing NVMe Controllers 00:11:56.940 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:56.940 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:11:56.940 Controller IO queue size 128, less than required. 00:11:56.940 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:11:56.940 WARNING: Some requested NVMe devices were skipped 00:11:56.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:11:56.940 Initialization complete. Launching workers. 00:11:56.940 ======================================================== 00:11:56.940 Latency(us) 00:11:56.940 Device Information : IOPS MiB/s Average min max 00:11:56.940 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 31099.83 15.19 4115.65 1430.73 9896.46 00:11:56.940 ======================================================== 00:11:56.940 Total : 31099.83 15.19 4115.65 1430.73 9896.46 00:11:56.940 00:11:57.201 09:20:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:57.462 09:20:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1061 00:11:57.462 09:20:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1061 00:11:57.462 true 00:11:57.462 09:20:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 561113 00:11:57.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (561113) - No such process 00:11:57.462 09:20:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 561113 00:11:57.462 09:20:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.723 09:20:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:57.723 09:20:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:11:57.723 09:20:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:11:57.723 09:20:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:11:57.723 09:20:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:57.723 09:20:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:11:57.983 null0 00:11:57.983 09:20:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:57.983 09:20:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:57.983 09:20:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:11:57.983 null1 00:11:58.243 09:20:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:58.243 09:20:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:58.243 09:20:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:11:58.243 null2 00:11:58.243 09:20:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:58.243 09:20:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:58.243 09:20:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:11:58.503 null3 00:11:58.503 09:20:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:58.503 09:20:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:58.503 09:20:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:58.503 null4 00:11:58.503 09:20:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:58.503 09:20:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:58.503 09:20:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:58.763 null5 00:11:58.763 09:20:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:58.763 09:20:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:58.763 09:20:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:59.022 null6 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:59.022 null7 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
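At this point the single-namespace phase has ended and the script sets up the parallel phase: eight null bdevs (null0 through null7) are created and eight add_remove workers are started against nqn.2016-06.io.spdk:cnode1; the worker launch loop is sketched a little further below. A sketch of the bdev-creation loop seen in the @58-@60 trace lines, reconstructed from the visible rpc.py calls (bdev_null_create takes a bdev name, a size, and a block size, here 100 and 4096; the for-loop form is an assumption based on the (( i = 0 )) / (( i < nthreads )) / (( ++i )) xtrace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nthreads=8
  pids=()
  for (( i = 0; i < nthreads; i++ )); do
      "$rpc" bdev_null_create "null$i" 100 4096   # prints the new bdev name (null0 ... null7) on success
  done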
00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.022 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
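Each worker runs the script's add_remove helper, whose xtrace is what interleaves through the entries here (@14-@18): it takes a namespace ID and a bdev name, then attaches and detaches that namespace ten times. A sketch of the helper reconstructed from those trace lines ($rpc as in the sketches above; the function body is an approximation of the visible RPCs, not the script's source):

  add_remove() {
      local nsid=$1 bdev=$2
      for (( i = 0; i < 10; i++ )); do
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # attach $bdev as namespace $nsid
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # detach it again
      done
  }

Because all eight workers hit the same subsystem concurrently, their @17 add and @18 remove lines for different namespace IDs alternate freely in the trace below.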
00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
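The @62-@64 lines around here are the launch loop: each add_remove invocation is put in the background, its PID is appended to the pids array, and once all eight are running the script waits for them (the "wait 567797 ... 567816" a few entries below). A minimal sketch of that spawn-and-collect pattern, with the same caveat that it is reconstructed from the xtrace rather than copied from the script:

  for (( i = 0; i < nthreads; i++ )); do
      add_remove "$((i + 1))" "null$i" &   # worker i hot-plugs namespace i+1 backed by null$i
      pids+=($!)                           # remember the background worker's PID
  done
  wait "${pids[@]}"                        # block until every worker finishes its 10 add/remove cycles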
00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 567797 567799 567802 567805 567808 567811 567814 567816 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.023 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:59.282 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.282 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:59.282 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:59.282 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:59.282 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:59.282 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:59.283 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.542 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.802 09:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:00.063 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.063 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:00.063 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:00.063 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:00.063 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:00.063 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:00.063 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:00.063 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.063 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.063 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:00.063 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.063 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.063 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:00.063 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:00.063 09:20:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.063 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.063 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:00.323 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:00.583 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.583 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.583 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:00.583 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.583 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.583 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:00.583 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.583 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.583 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:00.583 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.583 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.583 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:00.583 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.583 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.583 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:00.583 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:00.583 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:12:00.583 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.583 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:00.583 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.583 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.583 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:00.583 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.583 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:00.843 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:00.843 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:00.843 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:00.843 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:00.843 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.843 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.843 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:00.843 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:00.843 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.843 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.843 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:00.843 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.843 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.843 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:00.843 09:20:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.843 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.843 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:00.843 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.843 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.843 09:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:00.843 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.843 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.843 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:00.843 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:00.843 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.843 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.844 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:00.844 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.844 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:01.104 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.104 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.104 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:01.104 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:01.104 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:01.104 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:01.104 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:01.104 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.104 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.104 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:01.104 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.104 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.104 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:01.104 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.104 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.104 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:01.104 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:01.104 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.104 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.104 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:01.365 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.365 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.366 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:01.627 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.627 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.628 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:01.628 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:01.628 09:20:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.628 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.628 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:01.628 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.628 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.628 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:01.628 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.628 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.628 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:01.628 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:01.628 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.628 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:01.628 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:01.628 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:01.628 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.628 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.628 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:01.628 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:01.892 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:01.892 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.892 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.892 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:01.892 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.892 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.892 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:01.892 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.892 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.892 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:01.892 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:01.892 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.892 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.892 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:01.892 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.892 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.892 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:01.892 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.892 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.892 09:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:01.892 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.892 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.892 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:01.892 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:01.892 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:02.153 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:02.415 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:02.415 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:02.415 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:02.415 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:02.415 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.415 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:02.415 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:02.415 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:02.415 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:02.415 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:02.415 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:02.415 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:02.415 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:02.415 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:02.415 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:02.415 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:02.415 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:02.415 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:02.415 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:02.415 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:02.415 09:20:49 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:02.415 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:02.415 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:02.676 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:02.676 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:02.676 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:02.676 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:02.676 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:02.676 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:02.676 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:02.676 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:02.676 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:12:02.676 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:02.676 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:12:02.676 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:02.676 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:12:02.676 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:02.676 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:02.676 rmmod nvme_tcp 00:12:02.937 rmmod nvme_fabrics 00:12:02.937 rmmod nvme_keyring 00:12:02.937 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:02.937 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:12:02.937 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:12:02.937 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 560719 ']' 00:12:02.937 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 560719 00:12:02.937 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 560719 ']' 00:12:02.937 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 560719 00:12:02.937 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:12:02.937 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:02.937 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 560719 00:12:02.937 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:02.937 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:02.937 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 560719' 00:12:02.937 killing process with pid 560719 00:12:02.937 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 
-- # kill 560719 00:12:02.937 09:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 560719 00:12:02.937 09:20:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:02.937 09:20:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:02.937 09:20:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:02.937 09:20:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:02.937 09:20:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:02.937 09:20:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.937 09:20:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.937 09:20:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.504 09:20:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:05.504 00:12:05.504 real 0m48.907s 00:12:05.504 user 3m15.752s 00:12:05.504 sys 0m17.381s 00:12:05.504 09:20:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:05.504 09:20:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.504 ************************************ 00:12:05.504 END TEST nvmf_ns_hotplug_stress 00:12:05.504 ************************************ 00:12:05.504 09:20:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:05.504 09:20:52 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:05.504 09:20:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:05.504 09:20:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:05.504 09:20:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:05.504 ************************************ 00:12:05.504 START TEST nvmf_connect_stress 00:12:05.504 ************************************ 00:12:05.504 09:20:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:05.504 * Looking for test storage... 
00:12:05.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:05.504 09:20:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.504 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:05.504 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.504 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.504 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.504 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.504 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.504 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.504 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.504 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.504 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.504 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.504 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:05.504 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:05.504 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.504 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.504 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.504 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.504 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.504 09:20:52 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.504 09:20:52 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.504 09:20:52 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.504 09:20:52 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.504 09:20:52 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.505 09:20:52 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.505 09:20:52 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:05.505 09:20:52 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.505 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:12:05.505 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:05.505 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:05.505 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.505 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.505 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.505 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:05.505 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:05.505 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:05.505 09:20:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:05.505 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:05.505 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.505 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:05.505 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:05.505 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:05.505 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.505 09:20:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:12:05.505 09:20:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.505 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:05.505 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:05.505 09:20:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:12:05.505 09:20:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.648 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:13.648 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:12:13.648 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:13.648 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:13.648 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:13.648 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:13.648 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:13.648 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:12:13.648 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:13.648 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:12:13.648 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:12:13.648 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:12:13.648 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:12:13.648 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:12:13.648 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:12:13.648 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:13.648 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:13.649 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:13.649 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:13.649 Found net devices under 0000:31:00.0: cvl_0_0 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:13.649 09:21:00 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:13.649 Found net devices under 0000:31:00.1: cvl_0_1 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:13.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:13.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:12:13.649 00:12:13.649 --- 10.0.0.2 ping statistics --- 00:12:13.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.649 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:13.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:13.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:12:13.649 00:12:13.649 --- 10.0.0.1 ping statistics --- 00:12:13.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.649 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=573495 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 573495 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 573495 ']' 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:13.649 09:21:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.649 [2024-07-15 09:21:00.591834] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
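
For reference, the nvmf_tcp_init and nvmfappstart sequence traced above boils down to the shell steps below. This is a condensed sketch of what the log records, not a standalone script; the interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses, and the Jenkins workspace path are specific to this E810 phy run.

# Move one E810 port into a private namespace as the target side; keep the peer
# port in the default namespace as the initiator, then verify reachability.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP to the listener port
ping -c 1 10.0.0.2                                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator
# The target application is then launched inside the namespace, as traced:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE

Because the two ports sit in different namespaces, the NVMe/TCP traffic in the rest of this test crosses the physical link rather than loopback.
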
00:12:13.649 [2024-07-15 09:21:00.591882] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.649 EAL: No free 2048 kB hugepages reported on node 1 00:12:13.649 [2024-07-15 09:21:00.680812] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:13.649 [2024-07-15 09:21:00.765541] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:13.649 [2024-07-15 09:21:00.765604] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:13.649 [2024-07-15 09:21:00.765617] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:13.649 [2024-07-15 09:21:00.765624] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:13.650 [2024-07-15 09:21:00.765630] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:13.650 [2024-07-15 09:21:00.765804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:13.650 [2024-07-15 09:21:00.766056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.650 [2024-07-15 09:21:00.766056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.221 09:21:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:14.221 09:21:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:12:14.221 09:21:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:14.221 09:21:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:14.221 09:21:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.221 09:21:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:14.221 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:14.221 09:21:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.221 09:21:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.221 [2024-07-15 09:21:01.411798] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:14.221 09:21:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.221 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:14.221 09:21:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.221 09:21:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.482 09:21:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.482 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.482 09:21:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.482 09:21:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.482 [2024-07-15 09:21:01.449898] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.482 09:21:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.482 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:14.482 09:21:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.482 09:21:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.482 NULL1 00:12:14.482 09:21:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.482 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=573598 00:12:14.482 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:14.483 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress 
-- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.483 09:21:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.744 09:21:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.744 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:14.744 09:21:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.744 09:21:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.744 09:21:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.315 09:21:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.315 09:21:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:15.315 09:21:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.315 09:21:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.315 09:21:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.575 09:21:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.575 09:21:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:15.575 
09:21:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.575 09:21:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.575 09:21:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.836 09:21:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.836 09:21:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:15.836 09:21:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.836 09:21:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.836 09:21:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.097 09:21:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.097 09:21:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:16.097 09:21:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.097 09:21:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.097 09:21:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.368 09:21:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.368 09:21:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:16.368 09:21:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.368 09:21:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.368 09:21:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.677 09:21:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.677 09:21:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:16.678 09:21:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.678 09:21:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.678 09:21:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.276 09:21:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.276 09:21:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:17.276 09:21:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.276 09:21:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.276 09:21:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.537 09:21:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.537 09:21:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:17.537 09:21:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.537 09:21:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.537 09:21:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.798 09:21:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.798 09:21:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:17.798 09:21:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:12:17.798 09:21:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.798 09:21:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.058 09:21:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.058 09:21:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:18.058 09:21:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.058 09:21:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.058 09:21:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.318 09:21:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.318 09:21:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:18.319 09:21:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.319 09:21:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.319 09:21:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.890 09:21:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.890 09:21:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:18.890 09:21:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.890 09:21:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.890 09:21:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.151 09:21:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.151 09:21:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:19.151 09:21:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.151 09:21:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.151 09:21:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.412 09:21:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.412 09:21:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:19.412 09:21:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.412 09:21:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.412 09:21:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.673 09:21:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.673 09:21:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:19.674 09:21:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.674 09:21:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.674 09:21:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.934 09:21:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.934 09:21:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:19.934 09:21:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.934 09:21:07 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.934 09:21:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.507 09:21:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.507 09:21:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:20.507 09:21:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.507 09:21:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.507 09:21:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.767 09:21:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.767 09:21:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:20.767 09:21:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.767 09:21:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.767 09:21:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.026 09:21:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.026 09:21:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:21.026 09:21:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.026 09:21:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.026 09:21:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.286 09:21:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.286 09:21:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:21.286 09:21:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.286 09:21:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.286 09:21:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.546 09:21:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.546 09:21:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:21.546 09:21:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.546 09:21:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.546 09:21:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.114 09:21:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.114 09:21:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:22.114 09:21:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.114 09:21:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.114 09:21:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.373 09:21:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.373 09:21:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:22.373 09:21:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.373 09:21:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.373 
09:21:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.634 09:21:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.634 09:21:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:22.634 09:21:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.634 09:21:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.634 09:21:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.895 09:21:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.895 09:21:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:22.895 09:21:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.895 09:21:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.895 09:21:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.155 09:21:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.155 09:21:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:23.155 09:21:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.155 09:21:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.155 09:21:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.725 09:21:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.725 09:21:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:23.725 09:21:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.725 09:21:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.725 09:21:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.985 09:21:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.985 09:21:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:23.985 09:21:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.985 09:21:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.985 09:21:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.245 09:21:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.245 09:21:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:24.245 09:21:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.245 09:21:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.245 09:21:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.506 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:24.506 09:21:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.506 09:21:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573598 00:12:24.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (573598) - No such process 00:12:24.506 09:21:11 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 573598 00:12:24.506 09:21:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:24.506 09:21:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:24.506 09:21:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:24.506 09:21:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:24.506 09:21:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:12:24.506 09:21:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:24.506 09:21:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:12:24.506 09:21:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:24.506 09:21:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:24.506 rmmod nvme_tcp 00:12:24.506 rmmod nvme_fabrics 00:12:24.506 rmmod nvme_keyring 00:12:24.766 09:21:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:24.766 09:21:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:12:24.766 09:21:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:12:24.766 09:21:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 573495 ']' 00:12:24.766 09:21:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 573495 00:12:24.766 09:21:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 573495 ']' 00:12:24.766 09:21:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 573495 00:12:24.766 09:21:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:12:24.766 09:21:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:24.766 09:21:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 573495 00:12:24.766 09:21:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:24.766 09:21:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:24.766 09:21:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 573495' 00:12:24.766 killing process with pid 573495 00:12:24.766 09:21:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 573495 00:12:24.766 09:21:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 573495 00:12:24.766 09:21:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:24.766 09:21:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:24.766 09:21:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:24.766 09:21:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:24.766 09:21:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:24.766 09:21:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.766 09:21:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.766 09:21:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:12:27.315 09:21:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:27.315 00:12:27.315 real 0m21.737s 00:12:27.315 user 0m42.369s 00:12:27.315 sys 0m9.332s 00:12:27.315 09:21:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:27.315 09:21:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.315 ************************************ 00:12:27.315 END TEST nvmf_connect_stress 00:12:27.315 ************************************ 00:12:27.315 09:21:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:27.315 09:21:14 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:27.315 09:21:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:27.315 09:21:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:27.315 09:21:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:27.315 ************************************ 00:12:27.315 START TEST nvmf_fused_ordering 00:12:27.315 ************************************ 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:27.315 * Looking for test storage... 00:12:27.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.315 09:21:14 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:12:27.315 09:21:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:35.461 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:35.461 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:35.461 Found net devices under 0000:31:00.0: cvl_0_0 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:35.461 Found net devices under 0000:31:00.1: cvl_0_1 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:35.461 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:35.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:35.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:12:35.462 00:12:35.462 --- 10.0.0.2 ping statistics --- 00:12:35.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.462 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:35.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:35.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:12:35.462 00:12:35.462 --- 10.0.0.1 ping statistics --- 00:12:35.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.462 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=580879 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 580879 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 580879 ']' 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering 
-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:35.462 09:21:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:35.462 [2024-07-15 09:21:22.442862] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:12:35.462 [2024-07-15 09:21:22.442925] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.462 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.462 [2024-07-15 09:21:22.539006] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.462 [2024-07-15 09:21:22.634279] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.462 [2024-07-15 09:21:22.634339] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.462 [2024-07-15 09:21:22.634348] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.462 [2024-07-15 09:21:22.634361] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.462 [2024-07-15 09:21:22.634368] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
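For readers following the setup above: the nvmf/common.sh helpers are essentially building a two-port back-to-back testbed, moving one E810 port (cvl_0_0) into a private network namespace to play the target while its peer (cvl_0_1) stays in the root namespace as the initiator, then launching nvmf_tgt inside that namespace. A minimal hand-written sketch of that sequence, using the interface names, addresses and binary path taken from the log (the real helpers add retries and error handling), would be:

    ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside the namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic through
    ping -c 1 10.0.0.2                                                  # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability
    modprobe nvme-tcp                                                   # kernel initiator for later connect tests
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # target app on one core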
00:12:35.462 [2024-07-15 09:21:22.634398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.033 09:21:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:36.033 09:21:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:12:36.033 09:21:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:36.033 09:21:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:36.033 09:21:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:36.293 09:21:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.293 09:21:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:36.293 09:21:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.293 09:21:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:36.293 [2024-07-15 09:21:23.275497] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.293 09:21:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.293 09:21:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:36.293 09:21:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.293 09:21:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:36.293 09:21:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.293 09:21:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.293 09:21:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.293 09:21:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:36.293 [2024-07-15 09:21:23.299694] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.293 09:21:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.293 09:21:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:36.293 09:21:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.293 09:21:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:36.293 NULL1 00:12:36.293 09:21:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.293 09:21:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:36.293 09:21:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.293 09:21:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:36.293 09:21:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.293 09:21:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:36.293 09:21:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.293 09:21:23 
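The rpc_cmd calls above are the whole target-side configuration for this test: create the TCP transport, create subsystem cnode1, attach a TCP listener on the namespaced address, and expose a 1 GB null bdev as namespace 1. rpc_cmd is the autotest helper that forwards to the target's RPC socket; expressed as plain scripts/rpc.py invocations (a rough equivalent, assuming the default /var/tmp/spdk.sock socket), the sequence is:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512                    # 1000 MB null bdev, 512-byte block size
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering example binary invoked next simply connects to that listener with the transport ID string 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' and exercises fused command ordering against namespace 1; the fused_ordering(N) lines that follow are its per-iteration progress output.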
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:36.293 09:21:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.293 09:21:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:36.293 [2024-07-15 09:21:23.369902] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:12:36.293 [2024-07-15 09:21:23.369966] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid581119 ] 00:12:36.293 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.862 Attached to nqn.2016-06.io.spdk:cnode1 00:12:36.862 Namespace ID: 1 size: 1GB 00:12:36.862 fused_ordering(0) 00:12:36.862 fused_ordering(1) 00:12:36.862 fused_ordering(2) 00:12:36.862 fused_ordering(3) 00:12:36.862 fused_ordering(4) 00:12:36.862 fused_ordering(5) 00:12:36.862 fused_ordering(6) 00:12:36.862 fused_ordering(7) 00:12:36.862 fused_ordering(8) 00:12:36.862 fused_ordering(9) 00:12:36.862 fused_ordering(10) 00:12:36.862 fused_ordering(11) 00:12:36.862 fused_ordering(12) 00:12:36.862 fused_ordering(13) 00:12:36.862 fused_ordering(14) 00:12:36.862 fused_ordering(15) 00:12:36.862 fused_ordering(16) 00:12:36.862 fused_ordering(17) 00:12:36.862 fused_ordering(18) 00:12:36.862 fused_ordering(19) 00:12:36.862 fused_ordering(20) 00:12:36.862 fused_ordering(21) 00:12:36.862 fused_ordering(22) 00:12:36.862 fused_ordering(23) 00:12:36.862 fused_ordering(24) 00:12:36.862 fused_ordering(25) 00:12:36.862 fused_ordering(26) 00:12:36.862 fused_ordering(27) 00:12:36.862 fused_ordering(28) 00:12:36.862 fused_ordering(29) 00:12:36.862 fused_ordering(30) 00:12:36.862 fused_ordering(31) 00:12:36.862 fused_ordering(32) 00:12:36.862 fused_ordering(33) 00:12:36.862 fused_ordering(34) 00:12:36.862 fused_ordering(35) 00:12:36.862 fused_ordering(36) 00:12:36.862 fused_ordering(37) 00:12:36.862 fused_ordering(38) 00:12:36.862 fused_ordering(39) 00:12:36.862 fused_ordering(40) 00:12:36.862 fused_ordering(41) 00:12:36.862 fused_ordering(42) 00:12:36.862 fused_ordering(43) 00:12:36.862 fused_ordering(44) 00:12:36.862 fused_ordering(45) 00:12:36.862 fused_ordering(46) 00:12:36.862 fused_ordering(47) 00:12:36.862 fused_ordering(48) 00:12:36.862 fused_ordering(49) 00:12:36.862 fused_ordering(50) 00:12:36.862 fused_ordering(51) 00:12:36.862 fused_ordering(52) 00:12:36.862 fused_ordering(53) 00:12:36.862 fused_ordering(54) 00:12:36.862 fused_ordering(55) 00:12:36.862 fused_ordering(56) 00:12:36.862 fused_ordering(57) 00:12:36.862 fused_ordering(58) 00:12:36.862 fused_ordering(59) 00:12:36.862 fused_ordering(60) 00:12:36.862 fused_ordering(61) 00:12:36.862 fused_ordering(62) 00:12:36.862 fused_ordering(63) 00:12:36.862 fused_ordering(64) 00:12:36.862 fused_ordering(65) 00:12:36.862 fused_ordering(66) 00:12:36.862 fused_ordering(67) 00:12:36.862 fused_ordering(68) 00:12:36.862 fused_ordering(69) 00:12:36.862 fused_ordering(70) 00:12:36.862 fused_ordering(71) 00:12:36.862 fused_ordering(72) 00:12:36.862 fused_ordering(73) 00:12:36.862 fused_ordering(74) 00:12:36.862 fused_ordering(75) 00:12:36.862 fused_ordering(76) 00:12:36.862 fused_ordering(77) 00:12:36.862 fused_ordering(78) 00:12:36.862 
fused_ordering(79) 00:12:36.862 [fused_ordering progress lines for indices 80 through 938 are identical in form to those shown before and after this point and have been condensed] fused_ordering(939)
00:12:38.897 fused_ordering(940) 00:12:38.897 fused_ordering(941) 00:12:38.897 fused_ordering(942) 00:12:38.897 fused_ordering(943) 00:12:38.897 fused_ordering(944) 00:12:38.897 fused_ordering(945) 00:12:38.897 fused_ordering(946) 00:12:38.897 fused_ordering(947) 00:12:38.897 fused_ordering(948) 00:12:38.897 fused_ordering(949) 00:12:38.897 fused_ordering(950) 00:12:38.897 fused_ordering(951) 00:12:38.897 fused_ordering(952) 00:12:38.897 fused_ordering(953) 00:12:38.897 fused_ordering(954) 00:12:38.897 fused_ordering(955) 00:12:38.897 fused_ordering(956) 00:12:38.897 fused_ordering(957) 00:12:38.897 fused_ordering(958) 00:12:38.897 fused_ordering(959) 00:12:38.897 fused_ordering(960) 00:12:38.897 fused_ordering(961) 00:12:38.897 fused_ordering(962) 00:12:38.897 fused_ordering(963) 00:12:38.897 fused_ordering(964) 00:12:38.897 fused_ordering(965) 00:12:38.897 fused_ordering(966) 00:12:38.897 fused_ordering(967) 00:12:38.897 fused_ordering(968) 00:12:38.897 fused_ordering(969) 00:12:38.897 fused_ordering(970) 00:12:38.897 fused_ordering(971) 00:12:38.897 fused_ordering(972) 00:12:38.897 fused_ordering(973) 00:12:38.897 fused_ordering(974) 00:12:38.897 fused_ordering(975) 00:12:38.897 fused_ordering(976) 00:12:38.897 fused_ordering(977) 00:12:38.897 fused_ordering(978) 00:12:38.897 fused_ordering(979) 00:12:38.897 fused_ordering(980) 00:12:38.897 fused_ordering(981) 00:12:38.897 fused_ordering(982) 00:12:38.897 fused_ordering(983) 00:12:38.897 fused_ordering(984) 00:12:38.897 fused_ordering(985) 00:12:38.897 fused_ordering(986) 00:12:38.897 fused_ordering(987) 00:12:38.897 fused_ordering(988) 00:12:38.897 fused_ordering(989) 00:12:38.897 fused_ordering(990) 00:12:38.897 fused_ordering(991) 00:12:38.897 fused_ordering(992) 00:12:38.897 fused_ordering(993) 00:12:38.897 fused_ordering(994) 00:12:38.897 fused_ordering(995) 00:12:38.897 fused_ordering(996) 00:12:38.897 fused_ordering(997) 00:12:38.897 fused_ordering(998) 00:12:38.897 fused_ordering(999) 00:12:38.897 fused_ordering(1000) 00:12:38.897 fused_ordering(1001) 00:12:38.897 fused_ordering(1002) 00:12:38.897 fused_ordering(1003) 00:12:38.897 fused_ordering(1004) 00:12:38.897 fused_ordering(1005) 00:12:38.897 fused_ordering(1006) 00:12:38.897 fused_ordering(1007) 00:12:38.897 fused_ordering(1008) 00:12:38.897 fused_ordering(1009) 00:12:38.897 fused_ordering(1010) 00:12:38.897 fused_ordering(1011) 00:12:38.897 fused_ordering(1012) 00:12:38.897 fused_ordering(1013) 00:12:38.897 fused_ordering(1014) 00:12:38.897 fused_ordering(1015) 00:12:38.897 fused_ordering(1016) 00:12:38.897 fused_ordering(1017) 00:12:38.897 fused_ordering(1018) 00:12:38.897 fused_ordering(1019) 00:12:38.897 fused_ordering(1020) 00:12:38.897 fused_ordering(1021) 00:12:38.897 fused_ordering(1022) 00:12:38.897 fused_ordering(1023) 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:12:38.897 rmmod nvme_tcp 00:12:38.897 rmmod nvme_fabrics 00:12:38.897 rmmod nvme_keyring 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 580879 ']' 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 580879 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 580879 ']' 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 580879 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 580879 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 580879' 00:12:38.897 killing process with pid 580879 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 580879 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 580879 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:38.897 09:21:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.440 09:21:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:41.440 00:12:41.440 real 0m14.003s 00:12:41.440 user 0m7.399s 00:12:41.440 sys 0m7.366s 00:12:41.440 09:21:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:41.440 09:21:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:41.440 ************************************ 00:12:41.440 END TEST nvmf_fused_ordering 00:12:41.440 ************************************ 00:12:41.440 09:21:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:41.440 09:21:28 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:41.440 09:21:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:41.440 09:21:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:41.440 
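Before the delete_subsystem test proceeds, note that the nvmftestfini teardown which closed out the fused_ordering run above boils down to unloading the kernel NVMe modules, killing the target, and undoing the namespace plumbing. Stripped of retries and logging, and assuming remove_spdk_ns simply deletes the namespace it created (the exact helper internals are not shown in this log), that is roughly:

    sync
    modprobe -v -r nvme-tcp          # also pulls out nvme_fabrics / nvme_keyring, as the rmmod lines show
    modprobe -v -r nvme-fabrics
    kill 580879                      # nvmfpid recorded at startup; the target runs as reactor_1 here
    wait 580879
    ip netns delete cvl_0_0_ns_spdk  # assumed expansion of remove_spdk_ns for this testbed
    ip -4 addr flush cvl_0_1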
09:21:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:41.440 ************************************ 00:12:41.440 START TEST nvmf_delete_subsystem 00:12:41.440 ************************************ 00:12:41.440 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:41.440 * Looking for test storage... 00:12:41.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:41.440 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:41.440 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:41.440 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.440 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.440 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.440 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.440 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.440 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.440 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.440 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.440 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.440 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.440 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:41.440 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:41.440 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.440 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.440 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:41.440 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:41.440 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:41.440 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.440 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.440 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:41.441 09:21:28 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:41.441 09:21:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:49.585 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:49.585 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:49.585 
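The "Found 0000:31:00.x" lines above come from the common helper walking its cached PCI bus scan: it collects devices whose vendor/device IDs match the supported NIC lists (E810 is 8086:0x159b / 0x1592 here) and then reads the network interfaces sysfs exposes under each match. A simplified stand-alone equivalent, for illustration only and assuming lspci is available rather than the script's own PCI cache, is:

    # list E810 ports and the netdevs bound to them via sysfs
    for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net device under $pci: $(basename "$netdev")"
        done
    done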
09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:49.585 Found net devices under 0000:31:00.0: cvl_0_0 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.585 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:49.585 Found net devices under 0000:31:00.1: cvl_0_1 00:12:49.586 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.586 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:49.586 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:49.586 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:49.586 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:49.586 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:49.586 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:49.586 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:49.586 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:49.586 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:49.586 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:49.586 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:49.586 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:49.586 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:49.586 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:49.586 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:49.586 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:49.586 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:49.586 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:49.586 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:49.586 09:21:35 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:49.586 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:49.586 09:21:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:49.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:49.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:12:49.586 00:12:49.586 --- 10.0.0.2 ping statistics --- 00:12:49.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.586 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:49.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:49.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:12:49.586 00:12:49.586 --- 10.0.0.1 ping statistics --- 00:12:49.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.586 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=586132 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 586132 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 586132 ']' 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:49.586 [2024-07-15 09:21:36.131746] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:12:49.586 [2024-07-15 09:21:36.131801] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.586 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.586 [2024-07-15 09:21:36.206421] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:49.586 [2024-07-15 09:21:36.270925] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.586 [2024-07-15 09:21:36.270962] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:49.586 [2024-07-15 09:21:36.270969] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.586 [2024-07-15 09:21:36.270976] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.586 [2024-07-15 09:21:36.270982] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
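The xtrace above is nvmf_tcp_init from test/nvmf/common.sh wiring the two E810 ports into a loopback topology: the target-side port is moved into a network namespace, each side gets an address on 10.0.0.0/24, the firewall is opened for NVMe/TCP on port 4420, both directions are ping-tested, and nvmf_tgt is launched inside the namespace. Condensed into plain commands (interface names cvl_0_0/cvl_0_1 are the ones discovered above; this is a sketch of what the trace shows, not the helper verbatim):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP reach the initiator-side port
  ping -c 1 10.0.0.2                                              # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target namespace -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

waitforlisten then blocks until the target answers on /var/tmp/spdk.sock; the DPDK EAL and reactor notices around this point are that target coming up on cores 0 and 1.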
00:12:49.586 [2024-07-15 09:21:36.272768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.586 [2024-07-15 09:21:36.272789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:49.586 [2024-07-15 09:21:36.397981] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:49.586 [2024-07-15 09:21:36.414140] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:49.586 NULL1 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:49.586 Delay0 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.586 09:21:36 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=586256 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:49.586 09:21:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:49.586 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.586 [2024-07-15 09:21:36.498757] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:51.532 09:21:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.532 09:21:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.532 09:21:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 starting I/O failed: -6 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 starting I/O failed: -6 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 starting I/O failed: -6 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 starting I/O failed: -6 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 starting I/O failed: -6 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 starting I/O failed: -6 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 starting I/O failed: -6 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 starting I/O failed: -6 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Read 
completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 starting I/O failed: -6 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 starting I/O failed: -6 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 starting I/O failed: -6 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 starting I/O failed: -6 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 [2024-07-15 09:21:38.746108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e37e90 is same with the state(5) to be set 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 
00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Write completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.804 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Write completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Write completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 starting I/O failed: -6 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Write completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 starting I/O failed: -6 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 starting I/O failed: -6 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 starting I/O failed: -6 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Write completed with error (sct=0, sc=8) 00:12:51.805 starting I/O failed: -6 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 starting I/O failed: -6 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Write completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 starting I/O failed: -6 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Write completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 starting I/O failed: -6 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Write completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 starting I/O failed: -6 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 [2024-07-15 09:21:38.748316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f97c4000c00 is same with the state(5) to be set 00:12:51.805 Write completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 
Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Write completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Write completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Write completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Write completed with error (sct=0, sc=8) 00:12:51.805 Write completed with error (sct=0, sc=8) 00:12:51.805 Write completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Write completed with error (sct=0, sc=8) 00:12:51.805 Write completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Write completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 Read completed with error (sct=0, sc=8) 00:12:51.805 [2024-07-15 09:21:38.748814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f97c400d450 is same with the state(5) to be set 00:12:52.746 [2024-07-15 09:21:39.721782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e16500 is same with the state(5) to be set 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Read completed 
with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 [2024-07-15 09:21:39.750651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e36d00 is same with the state(5) to be set 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 [2024-07-15 09:21:39.750749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e37cb0 is same with the state(5) to be set 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, 
sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 [2024-07-15 09:21:39.751063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f97c400cfe0 is same with the state(5) to be set 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 Write completed with error (sct=0, sc=8) 00:12:52.746 Read completed with error (sct=0, sc=8) 00:12:52.746 [2024-07-15 09:21:39.751138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f97c400d760 is same with the state(5) to be set 00:12:52.746 Initializing NVMe Controllers 00:12:52.746 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:52.746 Controller IO queue size 128, less than required. 00:12:52.746 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:52.746 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:52.746 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:52.746 Initialization complete. Launching workers. 
00:12:52.746 ======================================================== 00:12:52.746 Latency(us) 00:12:52.747 Device Information : IOPS MiB/s Average min max 00:12:52.747 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 175.61 0.09 882170.56 223.21 1008263.24 00:12:52.747 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.24 0.07 998322.66 501.81 2002561.08 00:12:52.747 ======================================================== 00:12:52.747 Total : 325.85 0.16 935724.66 223.21 2002561.08 00:12:52.747 00:12:52.747 [2024-07-15 09:21:39.751648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e16500 (9): Bad file descriptor 00:12:52.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:52.747 09:21:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.747 09:21:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:12:52.747 09:21:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 586256 00:12:52.747 09:21:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 586256 00:12:53.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (586256) - No such process 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 586256 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 586256 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 586256 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
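That was the heart of the first delete_subsystem pass: spdk_nvme_perf was started in the background against cnode1, and two seconds later the subsystem was deleted out from under it, so every outstanding command completes with an error (the wall of 'completed with error (sct=0, sc=8)' lines, the skewed latency figures, and the 'Failed to flush tqpair ... Bad file descriptor' message are the expected fallout). Reconstructed roughly from the script line numbers in the trace (delete_subsystem.sh@26 through @45; a sketch, not the script verbatim):

  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # yank the subsystem while I/O is in flight
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do                   # poll until perf notices and exits
      (( delay++ > 30 )) && exit 1                            # give up after ~15 s
      sleep 0.5
  done
  NOT wait "$perf_pid"                                        # perf must exit non-zero for the test to pass

rpc_cmd and NOT are helpers from autotest_common.sh; the 'kill: (586256) - No such process' line that follows is this loop's kill -0 probe finding that perf has already exited.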
00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:53.319 [2024-07-15 09:21:40.283702] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=587134 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 587134 00:12:53.319 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:53.319 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.319 [2024-07-15 09:21:40.350175] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
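Before this second pass the target side was rebuilt through the same rpc_cmd helper (a thin retrying wrapper around scripts/rpc.py talking to the target's /var/tmp/spdk.sock), as the @48 through @50 lines above show, and a fresh 3-second perf run with the same flags was kicked off. The namespace it exposes is Delay0, the delay bdev stacked earlier on top of the null bdev NULL1 with large artificial latencies (the 1000000 arguments to bdev_delay_create), which is presumably what keeps commands outstanding long enough for the delete-during-I/O case to be interesting. A sketch of the rebuild:

  rpc_cmd nvmf_create_subsystem       nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns       nqn.2016-06.io.spdk:cnode1 Delay0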
00:12:53.889 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:53.889 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 587134 00:12:53.889 09:21:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:54.148 09:21:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:54.148 09:21:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 587134 00:12:54.148 09:21:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:54.720 09:21:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:54.720 09:21:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 587134 00:12:54.720 09:21:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:55.292 09:21:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:55.292 09:21:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 587134 00:12:55.292 09:21:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:55.861 09:21:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:55.861 09:21:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 587134 00:12:55.861 09:21:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:56.432 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:56.432 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 587134 00:12:56.432 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:56.432 Initializing NVMe Controllers 00:12:56.432 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:56.432 Controller IO queue size 128, less than required. 00:12:56.432 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:56.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:56.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:56.432 Initialization complete. Launching workers. 
00:12:56.432 ======================================================== 00:12:56.432 Latency(us) 00:12:56.432 Device Information : IOPS MiB/s Average min max 00:12:56.432 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002205.10 1000155.97 1041003.96 00:12:56.432 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002645.09 1000184.77 1009298.27 00:12:56.432 ======================================================== 00:12:56.432 Total : 256.00 0.12 1002425.09 1000155.97 1041003.96 00:12:56.432 00:12:56.692 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:56.692 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 587134 00:12:56.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (587134) - No such process 00:12:56.692 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 587134 00:12:56.692 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:56.692 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:56.692 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:56.692 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:12:56.692 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:56.692 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:12:56.692 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:56.692 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:56.692 rmmod nvme_tcp 00:12:56.692 rmmod nvme_fabrics 00:12:56.692 rmmod nvme_keyring 00:12:56.953 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:56.953 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:12:56.953 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:12:56.953 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 586132 ']' 00:12:56.953 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 586132 00:12:56.953 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 586132 ']' 00:12:56.953 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 586132 00:12:56.953 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:12:56.953 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:56.953 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 586132 00:12:56.953 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:56.953 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:56.953 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 586132' 00:12:56.953 killing process with pid 586132 00:12:56.953 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 586132 00:12:56.953 09:21:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 586132 
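With both passes finished the fixture tears itself down: nvmftestfini unloads the kernel initiator modules, kills the nvmf_tgt started at the top of the test (pid 586132), and, in the lines that follow, removes the spdk network namespace and flushes the test address before the roughly 18-second run is summarized and run_test moves on to nvmf_ns_masking. Condensed (the netns removal is our reading of _remove_spdk_ns, which the trace does not expand):

  modprobe -v -r nvme-tcp               # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"    # killprocess 586132: stop the target reactors
  ip netns delete cvl_0_0_ns_spdk       # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1              # drop the initiator-side test address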
00:12:56.953 09:21:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:56.953 09:21:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:56.953 09:21:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:56.953 09:21:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:56.953 09:21:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:56.953 09:21:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.953 09:21:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:56.953 09:21:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.498 09:21:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:59.498 00:12:59.498 real 0m18.038s 00:12:59.499 user 0m29.994s 00:12:59.499 sys 0m6.688s 00:12:59.499 09:21:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:59.499 09:21:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:59.499 ************************************ 00:12:59.499 END TEST nvmf_delete_subsystem 00:12:59.499 ************************************ 00:12:59.499 09:21:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:59.499 09:21:46 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:59.499 09:21:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:59.499 09:21:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:59.499 09:21:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:59.499 ************************************ 00:12:59.499 START TEST nvmf_ns_masking 00:12:59.499 ************************************ 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:59.499 * Looking for test storage... 
00:12:59.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=3c4f7f9f-7434-4768-a3a6-52afd03bfc4d 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=257240a3-1482-4350-a354-1b7b5ba21d73 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=c0742f64-44a3-4f9d-8b97-291a0c181a34 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:12:59.499 09:21:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:07.641 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:07.641 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:07.641 
09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:07.641 Found net devices under 0000:31:00.0: cvl_0_0 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:07.641 Found net devices under 0000:31:00.1: cvl_0_1 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:07.641 09:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:07.641 09:21:54 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:07.641 09:21:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:07.641 09:21:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:07.641 09:21:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:07.641 09:21:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:07.641 09:21:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:07.641 09:21:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:07.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:07.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:13:07.641 00:13:07.641 --- 10.0.0.2 ping statistics --- 00:13:07.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.641 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:13:07.641 09:21:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:07.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:07.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:13:07.641 00:13:07.641 --- 10.0.0.1 ping statistics --- 00:13:07.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.641 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:13:07.641 09:21:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:07.642 09:21:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:13:07.642 09:21:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:07.642 09:21:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:07.642 09:21:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:07.642 09:21:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:07.642 09:21:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:07.642 09:21:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:07.642 09:21:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:07.642 09:21:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:07.642 09:21:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:07.642 09:21:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:07.642 09:21:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:07.642 09:21:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=592497 00:13:07.642 09:21:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 592497 00:13:07.642 09:21:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:07.642 09:21:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 592497 ']' 00:13:07.642 09:21:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.642 09:21:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:07.642 09:21:54 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.642 09:21:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:07.642 09:21:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:07.642 [2024-07-15 09:21:54.308684] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:13:07.642 [2024-07-15 09:21:54.308748] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.642 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.642 [2024-07-15 09:21:54.386592] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.642 [2024-07-15 09:21:54.459984] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.642 [2024-07-15 09:21:54.460023] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.642 [2024-07-15 09:21:54.460031] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.642 [2024-07-15 09:21:54.460037] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.642 [2024-07-15 09:21:54.460043] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:07.642 [2024-07-15 09:21:54.460061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.902 09:21:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:07.902 09:21:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:13:07.902 09:21:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:07.902 09:21:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:07.902 09:21:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:08.163 09:21:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.163 09:21:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:08.163 [2024-07-15 09:21:55.251219] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:08.163 09:21:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:08.163 09:21:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:08.163 09:21:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:08.422 Malloc1 00:13:08.422 09:21:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:08.422 Malloc2 00:13:08.683 09:21:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
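The target bring-up traced above reduces to a short JSON-RPC sequence against the freshly started nvmf_tgt. A minimal sketch of the equivalent manual calls, with the long script path shortened to rpc.py (the -a flag lets any host connect, and the serial number is the one the initiator greps for later):

    # TCP transport with an 8192-byte IO unit size, as in the trace above
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    # two 64 MiB / 512-byte-block RAM disks to use as namespaces
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    # subsystem open to all hosts (-a) with serial SPDKISFASTANDAWESOME
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
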
00:13:08.683 09:21:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:08.943 09:21:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.943 [2024-07-15 09:21:56.097416] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.943 09:21:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:08.943 09:21:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c0742f64-44a3-4f9d-8b97-291a0c181a34 -a 10.0.0.2 -s 4420 -i 4 00:13:09.203 09:21:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:09.203 09:21:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:09.203 09:21:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:09.203 09:21:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:09.203 09:21:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:11.113 09:21:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:11.113 09:21:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:11.113 09:21:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:11.113 09:21:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:11.113 09:21:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:11.113 09:21:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:11.113 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:11.114 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:11.114 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:11.114 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:11.114 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:11.114 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:11.114 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:11.114 [ 0]:0x1 00:13:11.114 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:11.114 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:11.374 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8d043ffc43714f89b1de8557abafed46 00:13:11.374 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8d043ffc43714f89b1de8557abafed46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:11.374 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
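The ns_is_visible checks that dominate the rest of the run are just two nvme-cli queries against the connected controller: is the NSID listed, and does it report a real NGUID? A rough initiator-side equivalent, assuming the controller enumerates as /dev/nvme0 as it does in this run (the host ID is the uuidgen value generated for this particular run):

    # connect as host1, pinning the host NQN and host ID the masking rules key on
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -i 4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 -I c0742f64-44a3-4f9d-8b97-291a0c181a34
    # a namespace counts as visible when it appears in list-ns
    # and id-ns returns a non-zero NGUID
    nvme list-ns /dev/nvme0 | grep 0x1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid
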
00:13:11.374 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:11.374 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:11.374 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:11.374 [ 0]:0x1 00:13:11.374 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:11.374 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:11.374 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8d043ffc43714f89b1de8557abafed46 00:13:11.374 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8d043ffc43714f89b1de8557abafed46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:11.374 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:11.374 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:11.374 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:11.374 [ 1]:0x2 00:13:11.374 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:11.374 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:11.633 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=78538e2db8ba448c95d40522284f3e00 00:13:11.633 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 78538e2db8ba448c95d40522284f3e00 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:11.633 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:11.633 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:11.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.633 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.893 09:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:11.893 09:21:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:11.893 09:21:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c0742f64-44a3-4f9d-8b97-291a0c181a34 -a 10.0.0.2 -s 4420 -i 4 00:13:12.153 09:21:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:12.153 09:21:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:12.153 09:21:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.153 09:21:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:13:12.153 09:21:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:13:12.153 09:21:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:14.061 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:14.061 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:14.061 09:22:01 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.061 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:14.061 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.061 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:14.061 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:14.061 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:14.321 [ 0]:0x2 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=78538e2db8ba448c95d40522284f3e00 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
78538e2db8ba448c95d40522284f3e00 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:14.321 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:14.580 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:14.580 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:14.580 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:14.580 [ 0]:0x1 00:13:14.580 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:14.580 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:14.580 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8d043ffc43714f89b1de8557abafed46 00:13:14.580 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8d043ffc43714f89b1de8557abafed46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:14.580 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:14.580 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:14.580 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:14.580 [ 1]:0x2 00:13:14.580 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:14.580 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:14.580 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=78538e2db8ba448c95d40522284f3e00 00:13:14.580 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 78538e2db8ba448c95d40522284f3e00 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:14.580 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:14.841 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:14.841 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:14.841 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:14.841 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:14.841 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:14.841 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:14.841 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:14.841 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:14.842 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:14.842 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:14.842 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:14.842 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:14.842 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:13:14.842 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:14.842 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:14.842 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:14.842 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:14.842 09:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:14.842 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:14.842 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:14.842 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:14.842 [ 0]:0x2 00:13:14.842 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:14.842 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:14.842 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=78538e2db8ba448c95d40522284f3e00 00:13:14.842 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 78538e2db8ba448c95d40522284f3e00 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:14.842 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:14.842 09:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:15.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.101 09:22:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:15.101 09:22:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:15.101 09:22:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c0742f64-44a3-4f9d-8b97-291a0c181a34 -a 10.0.0.2 -s 4420 -i 4 00:13:15.101 09:22:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:15.101 09:22:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:15.101 09:22:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:15.101 09:22:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:15.101 09:22:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:15.101 09:22:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
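This stretch is the core of the masking test: a namespace attached with --no-auto-visible stays hidden from every controller until it is explicitly mapped to a host NQN, and removing the mapping hides it again without disturbing other namespaces. Condensed from the RPCs above, with the script path shortened to rpc.py:

    # ns 1 starts out invisible to all hosts
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # expose it to host1 only, then mask it again
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # ns 2 (Malloc2) was added without the flag, so it remains visible throughout
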
00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:17.643 [ 0]:0x1 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8d043ffc43714f89b1de8557abafed46 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8d043ffc43714f89b1de8557abafed46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:17.643 [ 1]:0x2 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=78538e2db8ba448c95d40522284f3e00 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 78538e2db8ba448c95d40522284f3e00 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:17.643 09:22:04 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:13:17.904 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:17.904 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:17.904 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:17.904 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:17.904 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:17.904 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:17.904 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:17.904 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:17.904 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:17.904 [ 0]:0x2 00:13:17.904 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:17.904 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:17.904 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=78538e2db8ba448c95d40522284f3e00 00:13:17.904 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 78538e2db8ba448c95d40522284f3e00 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:17.904 09:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:17.904 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:17.904 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:17.904 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:17.904 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:17.904 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:17.904 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:17.904 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:17.904 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:17.904 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:17.904 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:17.904 09:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:17.904 [2024-07-15 09:22:05.071114] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:17.904 request: 00:13:17.904 { 00:13:17.904 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:17.904 "nsid": 2, 00:13:17.904 "host": "nqn.2016-06.io.spdk:host1", 00:13:17.904 "method": "nvmf_ns_remove_host", 00:13:17.904 "req_id": 1 00:13:17.904 } 00:13:17.904 Got JSON-RPC error response 00:13:17.904 response: 00:13:17.904 { 00:13:17.904 "code": -32602, 00:13:17.904 "message": "Invalid parameters" 00:13:17.904 } 00:13:17.904 09:22:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:17.904 09:22:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:17.904 09:22:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:17.904 09:22:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:17.904 09:22:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:17.904 09:22:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:17.904 09:22:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:17.904 09:22:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:17.904 09:22:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:17.904 09:22:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:17.904 09:22:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:17.904 09:22:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:17.904 09:22:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:17.904 09:22:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:17.904 09:22:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:17.904 09:22:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:18.164 09:22:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:18.164 09:22:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:18.164 09:22:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:18.164 09:22:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:18.164 09:22:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:18.164 09:22:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:18.164 09:22:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:18.164 09:22:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:18.164 09:22:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:18.164 [ 0]:0x2 00:13:18.164 09:22:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:18.164 09:22:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:18.164 09:22:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=78538e2db8ba448c95d40522284f3e00 00:13:18.164 09:22:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
78538e2db8ba448c95d40522284f3e00 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:18.164 09:22:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:18.164 09:22:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:18.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.164 09:22:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=594694 00:13:18.164 09:22:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:18.164 09:22:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.164 09:22:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 594694 /var/tmp/host.sock 00:13:18.164 09:22:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 594694 ']' 00:13:18.164 09:22:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:13:18.164 09:22:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:18.164 09:22:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:18.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:18.164 09:22:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:18.164 09:22:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:18.165 [2024-07-15 09:22:05.292884] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
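For the second half the test switches to an SPDK initiator: a separate spdk_tgt instance is started with its own RPC socket (/var/tmp/host.sock) on core 1, and the connection is made through bdev_nvme_attach_controller instead of kernel nvme-cli. A rough sketch of the host-side attach that follows, with the binary and script paths shortened:

    # host-side SPDK app: dedicated RPC socket, core mask 0x2
    spdk_tgt -r /var/tmp/host.sock -m 2 &
    # attach to cnode1 as host1; namespaces visible to that host show up as bdevs (nvme0n1, ...)
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
    # list the resulting bdevs and compare their UUIDs against the NGUIDs set on the target
    rpc.py -s /var/tmp/host.sock bdev_get_bdevs
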
00:13:18.165 [2024-07-15 09:22:05.292932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid594694 ] 00:13:18.165 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.424 [2024-07-15 09:22:05.375140] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.424 [2024-07-15 09:22:05.439936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.995 09:22:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:18.995 09:22:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:13:18.995 09:22:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.995 09:22:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:19.256 09:22:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 3c4f7f9f-7434-4768-a3a6-52afd03bfc4d 00:13:19.256 09:22:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:19.256 09:22:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3C4F7F9F74344768A3A652AFD03BFC4D -i 00:13:19.516 09:22:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 257240a3-1482-4350-a354-1b7b5ba21d73 00:13:19.516 09:22:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:19.516 09:22:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 257240A314824350A3541B7B5BA21D73 -i 00:13:19.516 09:22:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:19.783 09:22:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:20.044 09:22:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:20.044 09:22:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:20.306 nvme0n1 00:13:20.306 09:22:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:20.306 09:22:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:13:20.566 nvme1n2 00:13:20.566 09:22:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:20.566 09:22:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:20.566 09:22:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:20.567 09:22:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:20.567 09:22:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:20.827 09:22:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:20.827 09:22:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:20.827 09:22:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:20.827 09:22:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:21.091 09:22:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 3c4f7f9f-7434-4768-a3a6-52afd03bfc4d == \3\c\4\f\7\f\9\f\-\7\4\3\4\-\4\7\6\8\-\a\3\a\6\-\5\2\a\f\d\0\3\b\f\c\4\d ]] 00:13:21.091 09:22:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:21.091 09:22:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:21.091 09:22:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:21.091 09:22:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 257240a3-1482-4350-a354-1b7b5ba21d73 == \2\5\7\2\4\0\a\3\-\1\4\8\2\-\4\3\5\0\-\a\3\5\4\-\1\b\7\b\5\b\a\2\1\d\7\3 ]] 00:13:21.091 09:22:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 594694 00:13:21.091 09:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 594694 ']' 00:13:21.091 09:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 594694 00:13:21.091 09:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:13:21.091 09:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:21.091 09:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 594694 00:13:21.091 09:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:21.091 09:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:21.091 09:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 594694' 00:13:21.091 killing process with pid 594694 00:13:21.091 09:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 594694 00:13:21.091 09:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 594694 00:13:21.434 09:22:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.712 09:22:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:13:21.712 09:22:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:13:21.712 09:22:08 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:21.712 09:22:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:13:21.712 09:22:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:21.712 09:22:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:13:21.712 09:22:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:21.712 09:22:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:21.712 rmmod nvme_tcp 00:13:21.712 rmmod nvme_fabrics 00:13:21.712 rmmod nvme_keyring 00:13:21.712 09:22:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:21.712 09:22:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:13:21.712 09:22:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:13:21.712 09:22:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 592497 ']' 00:13:21.712 09:22:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 592497 00:13:21.712 09:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 592497 ']' 00:13:21.713 09:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 592497 00:13:21.713 09:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:13:21.713 09:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:21.713 09:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 592497 00:13:21.713 09:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:21.713 09:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:21.713 09:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 592497' 00:13:21.713 killing process with pid 592497 00:13:21.713 09:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 592497 00:13:21.713 09:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 592497 00:13:21.973 09:22:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:21.973 09:22:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:21.973 09:22:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:21.973 09:22:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:21.973 09:22:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:21.973 09:22:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.973 09:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:21.973 09:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.888 09:22:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:23.888 00:13:23.888 real 0m24.748s 00:13:23.888 user 0m24.274s 00:13:23.888 sys 0m7.683s 00:13:23.888 09:22:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:23.888 09:22:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:23.888 ************************************ 00:13:23.888 END TEST nvmf_ns_masking 00:13:23.888 ************************************ 00:13:23.888 09:22:11 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:13:23.888 09:22:11 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:13:23.888 09:22:11 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:23.889 09:22:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:23.889 09:22:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:23.889 09:22:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:23.889 ************************************ 00:13:23.889 START TEST nvmf_nvme_cli 00:13:23.889 ************************************ 00:13:23.889 09:22:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:24.149 * Looking for test storage... 00:13:24.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:24.149 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:24.150 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:24.150 09:22:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:24.150 09:22:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:24.150 09:22:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:24.150 09:22:11 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:13:24.150 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:24.150 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.150 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:24.150 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:24.150 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:24.150 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.150 09:22:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:24.150 09:22:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.150 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:24.150 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:24.150 09:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:13:24.150 09:22:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:32.292 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:32.292 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.292 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:32.292 Found net devices under 0000:31:00.0: cvl_0_0 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:32.293 Found net devices under 0000:31:00.1: cvl_0_1 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:32.293 09:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:32.293 09:22:19 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:32.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:32.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:13:32.293 00:13:32.293 --- 10.0.0.2 ping statistics --- 00:13:32.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.293 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:32.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:32.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.380 ms 00:13:32.293 00:13:32.293 --- 10.0.0.1 ping statistics --- 00:13:32.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.293 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=600071 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 600071 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 600071 ']' 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:32.293 09:22:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:32.293 [2024-07-15 09:22:19.234315] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
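The nvmftestinit plumbing traced above reduces to the following sketch (reconstructed from the xtrace; the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addresses and the namespace name come straight from the log, everything else is a paraphrase rather than a verbatim excerpt):

    # one port of the e810 pair becomes the target side, isolated in a network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The remainder of nvme_cli.sh, traced below, then drives roughly this target/initiator sequence (rpc_cmd in the trace is the harness wrapper around scripts/rpc.py; $NVME_HOSTNQN and $NVME_HOSTID are the values generated by 'nvme gen-hostnqn' earlier in the trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme list                                  # expects /dev/nvme0n1 and /dev/nvme0n2, serial SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
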
00:13:32.293 [2024-07-15 09:22:19.234364] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.293 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.293 [2024-07-15 09:22:19.307955] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:32.293 [2024-07-15 09:22:19.375093] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:32.293 [2024-07-15 09:22:19.375129] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:32.293 [2024-07-15 09:22:19.375137] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:32.293 [2024-07-15 09:22:19.375143] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:32.293 [2024-07-15 09:22:19.375149] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:32.293 [2024-07-15 09:22:19.375289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.293 [2024-07-15 09:22:19.375410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.293 [2024-07-15 09:22:19.375570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.293 [2024-07-15 09:22:19.375571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:32.865 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.865 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:13:32.865 09:22:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:32.865 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:32.865 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:32.865 09:22:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.865 09:22:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:32.865 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.865 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:32.865 [2024-07-15 09:22:20.057448] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.126 Malloc0 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.126 Malloc1 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.126 09:22:20 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.126 [2024-07-15 09:22:20.148305] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 4420 00:13:33.126 00:13:33.126 Discovery Log Number of Records 2, Generation counter 2 00:13:33.126 =====Discovery Log Entry 0====== 00:13:33.126 trtype: tcp 00:13:33.126 adrfam: ipv4 00:13:33.126 subtype: current discovery subsystem 00:13:33.126 treq: not required 00:13:33.126 portid: 0 00:13:33.126 trsvcid: 4420 00:13:33.126 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:33.126 traddr: 10.0.0.2 00:13:33.126 eflags: explicit discovery connections, duplicate discovery information 00:13:33.126 sectype: none 00:13:33.126 =====Discovery Log Entry 1====== 00:13:33.126 trtype: tcp 00:13:33.126 adrfam: ipv4 00:13:33.126 subtype: nvme subsystem 00:13:33.126 treq: not required 00:13:33.126 portid: 0 00:13:33.126 trsvcid: 4420 00:13:33.126 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:33.126 traddr: 10.0.0.2 00:13:33.126 eflags: none 00:13:33.126 sectype: none 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:33.126 09:22:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:35.036 09:22:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:35.036 09:22:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:13:35.036 09:22:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:35.036 09:22:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:35.036 09:22:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:35.036 09:22:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:13:36.947 09:22:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:36.948 09:22:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:36.948 09:22:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:36.948 09:22:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:36.948 09:22:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:36.948 09:22:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:13:36.948 09:22:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:36.948 09:22:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:36.948 09:22:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:36.948 09:22:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:36.948 09:22:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:36.948 09:22:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:36.948 09:22:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:36.948 09:22:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:36.948 09:22:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:36.948 09:22:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:36.948 09:22:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:36.948 09:22:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:36.948 09:22:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:36.948 09:22:23 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:36.948 09:22:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:13:36.948 /dev/nvme0n1 ]] 00:13:36.948 09:22:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:36.948 09:22:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:36.948 09:22:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:36.948 09:22:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:36.948 09:22:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:36.948 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:36.948 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:36.948 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:36.948 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:36.948 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:36.948 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:36.948 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:36.948 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:36.948 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:36.948 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:36.948 09:22:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:36.948 09:22:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:37.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.209 09:22:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:37.209 09:22:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:13:37.209 09:22:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:37.209 09:22:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:37.209 09:22:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:37.209 09:22:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:37.209 09:22:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:13:37.209 09:22:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:37.209 09:22:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:37.209 09:22:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.209 09:22:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:37.209 09:22:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.209 09:22:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:37.209 09:22:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:37.209 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:37.209 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:13:37.209 09:22:24 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:37.209 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:13:37.209 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:37.209 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:37.209 rmmod nvme_tcp 00:13:37.209 rmmod nvme_fabrics 00:13:37.470 rmmod nvme_keyring 00:13:37.470 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:37.470 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:13:37.470 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:13:37.470 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 600071 ']' 00:13:37.470 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 600071 00:13:37.470 09:22:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 600071 ']' 00:13:37.470 09:22:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 600071 00:13:37.470 09:22:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:13:37.470 09:22:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:37.470 09:22:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 600071 00:13:37.470 09:22:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:37.470 09:22:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:37.470 09:22:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 600071' 00:13:37.470 killing process with pid 600071 00:13:37.470 09:22:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 600071 00:13:37.470 09:22:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 600071 00:13:37.470 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:37.470 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:37.470 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:37.470 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:37.470 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:37.470 09:22:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.470 09:22:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:37.470 09:22:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.043 09:22:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:40.043 00:13:40.043 real 0m15.653s 00:13:40.043 user 0m23.424s 00:13:40.043 sys 0m6.381s 00:13:40.043 09:22:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:40.043 09:22:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:40.043 ************************************ 00:13:40.043 END TEST nvmf_nvme_cli 00:13:40.043 ************************************ 00:13:40.044 09:22:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:40.044 09:22:26 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:13:40.044 09:22:26 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:13:40.044 09:22:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:40.044 09:22:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:40.044 09:22:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:40.044 ************************************ 00:13:40.044 START TEST nvmf_vfio_user 00:13:40.044 ************************************ 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:40.044 * Looking for test storage... 00:13:40.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:40.044 
09:22:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=601862 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 601862' 00:13:40.044 Process pid: 601862 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 601862 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 601862 ']' 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:40.044 09:22:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:40.044 [2024-07-15 09:22:27.006498] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:13:40.044 [2024-07-15 09:22:27.006570] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.044 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.044 [2024-07-15 09:22:27.079352] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.044 [2024-07-15 09:22:27.154973] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.044 [2024-07-15 09:22:27.155012] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.044 [2024-07-15 09:22:27.155020] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.044 [2024-07-15 09:22:27.155027] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.044 [2024-07-15 09:22:27.155032] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
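The vfio-user half of the run that follows sets up two in-process controllers backed by malloc bdevs. Reconstructed from the trace below (the directory layout, NQNs and serials are the ones the script uses; this is an illustrative sketch, not a verbatim excerpt):

    rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
    # repeated for the second device: /var/run/vfio-user/domain/vfio-user2/2, Malloc2, nqn.2019-07.io.spdk:cnode2, SPDK2

Each -a argument is a directory in which the target exposes the vfio-user control socket (the 'cntrl' path seen later in the trace); the initiator attaches to that socket in place of a physical PCIe function.
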
00:13:40.044 [2024-07-15 09:22:27.155167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.044 [2024-07-15 09:22:27.155293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.044 [2024-07-15 09:22:27.155448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.044 [2024-07-15 09:22:27.155448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.615 09:22:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:40.615 09:22:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:40.615 09:22:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:41.997 09:22:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:41.997 09:22:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:41.997 09:22:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:41.997 09:22:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:41.997 09:22:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:41.997 09:22:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:41.997 Malloc1 00:13:41.997 09:22:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:42.257 09:22:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:42.516 09:22:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:42.516 09:22:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:42.516 09:22:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:42.516 09:22:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:42.775 Malloc2 00:13:42.775 09:22:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:43.034 09:22:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:43.034 09:22:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:43.297 09:22:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:43.297 09:22:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:43.297 09:22:30 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:43.297 09:22:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:43.297 09:22:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:43.297 09:22:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:43.297 [2024-07-15 09:22:30.379554] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:13:43.297 [2024-07-15 09:22:30.379598] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602547 ] 00:13:43.297 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.297 [2024-07-15 09:22:30.410384] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:43.297 [2024-07-15 09:22:30.423663] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:43.297 [2024-07-15 09:22:30.423684] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd81d0b0000 00:13:43.297 [2024-07-15 09:22:30.424660] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:43.297 [2024-07-15 09:22:30.425659] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:43.297 [2024-07-15 09:22:30.426672] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:43.297 [2024-07-15 09:22:30.427674] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:43.297 [2024-07-15 09:22:30.428683] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:43.297 [2024-07-15 09:22:30.429692] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:43.297 [2024-07-15 09:22:30.430693] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:43.297 [2024-07-15 09:22:30.431717] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:43.297 [2024-07-15 09:22:30.432713] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:43.297 [2024-07-15 09:22:30.432722] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd81d0a5000 00:13:43.297 [2024-07-15 09:22:30.434052] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:43.297 [2024-07-15 09:22:30.450979] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:43.297 [2024-07-15 09:22:30.451007] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:13:43.297 [2024-07-15 09:22:30.455846] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:43.297 [2024-07-15 09:22:30.455891] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:43.297 [2024-07-15 09:22:30.455982] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:13:43.297 [2024-07-15 09:22:30.456000] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:13:43.297 [2024-07-15 09:22:30.456006] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:13:43.297 [2024-07-15 09:22:30.456844] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:43.297 [2024-07-15 09:22:30.456853] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:13:43.297 [2024-07-15 09:22:30.456860] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:13:43.297 [2024-07-15 09:22:30.457843] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:43.297 [2024-07-15 09:22:30.457852] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:13:43.297 [2024-07-15 09:22:30.457859] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:13:43.297 [2024-07-15 09:22:30.458851] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:43.297 [2024-07-15 09:22:30.458859] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:43.297 [2024-07-15 09:22:30.459854] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:43.297 [2024-07-15 09:22:30.459862] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:13:43.297 [2024-07-15 09:22:30.459867] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:13:43.297 [2024-07-15 09:22:30.459874] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:43.297 [2024-07-15 09:22:30.459979] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:13:43.297 [2024-07-15 09:22:30.459984] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:43.297 [2024-07-15 09:22:30.459989] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:43.297 [2024-07-15 09:22:30.460859] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:43.297 [2024-07-15 09:22:30.461864] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:43.297 [2024-07-15 09:22:30.462864] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:43.297 [2024-07-15 09:22:30.463861] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:43.297 [2024-07-15 09:22:30.463917] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:43.297 [2024-07-15 09:22:30.464872] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:43.297 [2024-07-15 09:22:30.464880] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:43.297 [2024-07-15 09:22:30.464887] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:13:43.297 [2024-07-15 09:22:30.464908] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:13:43.297 [2024-07-15 09:22:30.464915] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:13:43.297 [2024-07-15 09:22:30.464930] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:43.297 [2024-07-15 09:22:30.464935] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:43.297 [2024-07-15 09:22:30.464949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:43.297 [2024-07-15 09:22:30.464985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:43.297 [2024-07-15 09:22:30.464994] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:13:43.297 [2024-07-15 09:22:30.465001] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:13:43.297 [2024-07-15 09:22:30.465005] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:13:43.297 [2024-07-15 09:22:30.465010] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:43.297 [2024-07-15 09:22:30.465015] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:13:43.297 [2024-07-15 09:22:30.465020] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:13:43.297 [2024-07-15 09:22:30.465024] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:13:43.297 [2024-07-15 09:22:30.465032] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:13:43.297 [2024-07-15 09:22:30.465042] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:43.297 [2024-07-15 09:22:30.465053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:43.297 [2024-07-15 09:22:30.465066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:43.297 [2024-07-15 09:22:30.465075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:43.297 [2024-07-15 09:22:30.465083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:43.297 [2024-07-15 09:22:30.465091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:43.297 [2024-07-15 09:22:30.465096] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:43.297 [2024-07-15 09:22:30.465104] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:43.297 [2024-07-15 09:22:30.465113] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:43.298 [2024-07-15 09:22:30.465123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:43.298 [2024-07-15 09:22:30.465132] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:13:43.298 [2024-07-15 09:22:30.465137] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:43.298 [2024-07-15 09:22:30.465144] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:13:43.298 [2024-07-15 09:22:30.465150] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:43.298 [2024-07-15 09:22:30.465159] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:43.298 [2024-07-15 09:22:30.465170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:43.298 [2024-07-15 09:22:30.465230] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:13:43.298 [2024-07-15 09:22:30.465237] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:43.298 [2024-07-15 09:22:30.465245] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:43.298 [2024-07-15 09:22:30.465249] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:43.298 [2024-07-15 09:22:30.465255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:43.298 [2024-07-15 09:22:30.465265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:43.298 [2024-07-15 09:22:30.465274] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:13:43.298 [2024-07-15 09:22:30.465286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:13:43.298 [2024-07-15 09:22:30.465294] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:13:43.298 [2024-07-15 09:22:30.465301] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:43.298 [2024-07-15 09:22:30.465305] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:43.298 [2024-07-15 09:22:30.465311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:43.298 [2024-07-15 09:22:30.465328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:43.298 [2024-07-15 09:22:30.465341] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:43.298 [2024-07-15 09:22:30.465349] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:43.298 [2024-07-15 09:22:30.465356] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:43.298 [2024-07-15 09:22:30.465360] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:43.298 [2024-07-15 09:22:30.465366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:43.298 [2024-07-15 09:22:30.465375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:43.298 [2024-07-15 09:22:30.465386] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:43.298 [2024-07-15 09:22:30.465392] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:13:43.298 [2024-07-15 09:22:30.465400] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:13:43.298 [2024-07-15 09:22:30.465405] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:13:43.298 [2024-07-15 09:22:30.465410] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:43.298 [2024-07-15 09:22:30.465416] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:13:43.298 [2024-07-15 09:22:30.465421] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:13:43.298 [2024-07-15 09:22:30.465425] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:13:43.298 [2024-07-15 09:22:30.465430] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:13:43.298 [2024-07-15 09:22:30.465448] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:43.298 [2024-07-15 09:22:30.465458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:43.298 [2024-07-15 09:22:30.465470] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:43.298 [2024-07-15 09:22:30.465482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:43.298 [2024-07-15 09:22:30.465492] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:43.298 [2024-07-15 09:22:30.465499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:43.298 [2024-07-15 09:22:30.465510] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:43.298 [2024-07-15 09:22:30.465517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:43.298 [2024-07-15 09:22:30.465530] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:43.298 [2024-07-15 09:22:30.465534] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:43.298 [2024-07-15 09:22:30.465538] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:43.298 [2024-07-15 09:22:30.465541] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:43.298 [2024-07-15 09:22:30.465547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:43.298 [2024-07-15 09:22:30.465555] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:43.298 
[2024-07-15 09:22:30.465559] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:43.298 [2024-07-15 09:22:30.465565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:43.298 [2024-07-15 09:22:30.465572] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:43.298 [2024-07-15 09:22:30.465577] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:43.298 [2024-07-15 09:22:30.465584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:43.298 [2024-07-15 09:22:30.465592] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:43.298 [2024-07-15 09:22:30.465596] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:43.298 [2024-07-15 09:22:30.465602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:43.298 [2024-07-15 09:22:30.465609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:43.298 [2024-07-15 09:22:30.465621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:43.298 [2024-07-15 09:22:30.465631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:43.298 [2024-07-15 09:22:30.465638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:43.298 ===================================================== 00:13:43.298 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:43.298 ===================================================== 00:13:43.298 Controller Capabilities/Features 00:13:43.298 ================================ 00:13:43.298 Vendor ID: 4e58 00:13:43.298 Subsystem Vendor ID: 4e58 00:13:43.298 Serial Number: SPDK1 00:13:43.298 Model Number: SPDK bdev Controller 00:13:43.298 Firmware Version: 24.09 00:13:43.298 Recommended Arb Burst: 6 00:13:43.298 IEEE OUI Identifier: 8d 6b 50 00:13:43.298 Multi-path I/O 00:13:43.298 May have multiple subsystem ports: Yes 00:13:43.298 May have multiple controllers: Yes 00:13:43.298 Associated with SR-IOV VF: No 00:13:43.298 Max Data Transfer Size: 131072 00:13:43.298 Max Number of Namespaces: 32 00:13:43.298 Max Number of I/O Queues: 127 00:13:43.298 NVMe Specification Version (VS): 1.3 00:13:43.298 NVMe Specification Version (Identify): 1.3 00:13:43.298 Maximum Queue Entries: 256 00:13:43.298 Contiguous Queues Required: Yes 00:13:43.298 Arbitration Mechanisms Supported 00:13:43.298 Weighted Round Robin: Not Supported 00:13:43.298 Vendor Specific: Not Supported 00:13:43.298 Reset Timeout: 15000 ms 00:13:43.298 Doorbell Stride: 4 bytes 00:13:43.298 NVM Subsystem Reset: Not Supported 00:13:43.298 Command Sets Supported 00:13:43.298 NVM Command Set: Supported 00:13:43.298 Boot Partition: Not Supported 00:13:43.298 Memory Page Size Minimum: 4096 bytes 00:13:43.298 Memory Page Size Maximum: 4096 bytes 00:13:43.298 Persistent Memory Region: Not Supported 
00:13:43.298 Optional Asynchronous Events Supported 00:13:43.298 Namespace Attribute Notices: Supported 00:13:43.298 Firmware Activation Notices: Not Supported 00:13:43.298 ANA Change Notices: Not Supported 00:13:43.298 PLE Aggregate Log Change Notices: Not Supported 00:13:43.298 LBA Status Info Alert Notices: Not Supported 00:13:43.298 EGE Aggregate Log Change Notices: Not Supported 00:13:43.298 Normal NVM Subsystem Shutdown event: Not Supported 00:13:43.298 Zone Descriptor Change Notices: Not Supported 00:13:43.298 Discovery Log Change Notices: Not Supported 00:13:43.298 Controller Attributes 00:13:43.298 128-bit Host Identifier: Supported 00:13:43.298 Non-Operational Permissive Mode: Not Supported 00:13:43.298 NVM Sets: Not Supported 00:13:43.298 Read Recovery Levels: Not Supported 00:13:43.298 Endurance Groups: Not Supported 00:13:43.298 Predictable Latency Mode: Not Supported 00:13:43.298 Traffic Based Keep ALive: Not Supported 00:13:43.298 Namespace Granularity: Not Supported 00:13:43.298 SQ Associations: Not Supported 00:13:43.298 UUID List: Not Supported 00:13:43.298 Multi-Domain Subsystem: Not Supported 00:13:43.298 Fixed Capacity Management: Not Supported 00:13:43.298 Variable Capacity Management: Not Supported 00:13:43.298 Delete Endurance Group: Not Supported 00:13:43.298 Delete NVM Set: Not Supported 00:13:43.298 Extended LBA Formats Supported: Not Supported 00:13:43.299 Flexible Data Placement Supported: Not Supported 00:13:43.299 00:13:43.299 Controller Memory Buffer Support 00:13:43.299 ================================ 00:13:43.299 Supported: No 00:13:43.299 00:13:43.299 Persistent Memory Region Support 00:13:43.299 ================================ 00:13:43.299 Supported: No 00:13:43.299 00:13:43.299 Admin Command Set Attributes 00:13:43.299 ============================ 00:13:43.299 Security Send/Receive: Not Supported 00:13:43.299 Format NVM: Not Supported 00:13:43.299 Firmware Activate/Download: Not Supported 00:13:43.299 Namespace Management: Not Supported 00:13:43.299 Device Self-Test: Not Supported 00:13:43.299 Directives: Not Supported 00:13:43.299 NVMe-MI: Not Supported 00:13:43.299 Virtualization Management: Not Supported 00:13:43.299 Doorbell Buffer Config: Not Supported 00:13:43.299 Get LBA Status Capability: Not Supported 00:13:43.299 Command & Feature Lockdown Capability: Not Supported 00:13:43.299 Abort Command Limit: 4 00:13:43.299 Async Event Request Limit: 4 00:13:43.299 Number of Firmware Slots: N/A 00:13:43.299 Firmware Slot 1 Read-Only: N/A 00:13:43.299 Firmware Activation Without Reset: N/A 00:13:43.299 Multiple Update Detection Support: N/A 00:13:43.299 Firmware Update Granularity: No Information Provided 00:13:43.299 Per-Namespace SMART Log: No 00:13:43.299 Asymmetric Namespace Access Log Page: Not Supported 00:13:43.299 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:43.299 Command Effects Log Page: Supported 00:13:43.299 Get Log Page Extended Data: Supported 00:13:43.299 Telemetry Log Pages: Not Supported 00:13:43.299 Persistent Event Log Pages: Not Supported 00:13:43.299 Supported Log Pages Log Page: May Support 00:13:43.299 Commands Supported & Effects Log Page: Not Supported 00:13:43.299 Feature Identifiers & Effects Log Page:May Support 00:13:43.299 NVMe-MI Commands & Effects Log Page: May Support 00:13:43.299 Data Area 4 for Telemetry Log: Not Supported 00:13:43.299 Error Log Page Entries Supported: 128 00:13:43.299 Keep Alive: Supported 00:13:43.299 Keep Alive Granularity: 10000 ms 00:13:43.299 00:13:43.299 NVM Command Set Attributes 
00:13:43.299 ========================== 00:13:43.299 Submission Queue Entry Size 00:13:43.299 Max: 64 00:13:43.299 Min: 64 00:13:43.299 Completion Queue Entry Size 00:13:43.299 Max: 16 00:13:43.299 Min: 16 00:13:43.299 Number of Namespaces: 32 00:13:43.299 Compare Command: Supported 00:13:43.299 Write Uncorrectable Command: Not Supported 00:13:43.299 Dataset Management Command: Supported 00:13:43.299 Write Zeroes Command: Supported 00:13:43.299 Set Features Save Field: Not Supported 00:13:43.299 Reservations: Not Supported 00:13:43.299 Timestamp: Not Supported 00:13:43.299 Copy: Supported 00:13:43.299 Volatile Write Cache: Present 00:13:43.299 Atomic Write Unit (Normal): 1 00:13:43.299 Atomic Write Unit (PFail): 1 00:13:43.299 Atomic Compare & Write Unit: 1 00:13:43.299 Fused Compare & Write: Supported 00:13:43.299 Scatter-Gather List 00:13:43.299 SGL Command Set: Supported (Dword aligned) 00:13:43.299 SGL Keyed: Not Supported 00:13:43.299 SGL Bit Bucket Descriptor: Not Supported 00:13:43.299 SGL Metadata Pointer: Not Supported 00:13:43.299 Oversized SGL: Not Supported 00:13:43.299 SGL Metadata Address: Not Supported 00:13:43.299 SGL Offset: Not Supported 00:13:43.299 Transport SGL Data Block: Not Supported 00:13:43.299 Replay Protected Memory Block: Not Supported 00:13:43.299 00:13:43.299 Firmware Slot Information 00:13:43.299 ========================= 00:13:43.299 Active slot: 1 00:13:43.299 Slot 1 Firmware Revision: 24.09 00:13:43.299 00:13:43.299 00:13:43.299 Commands Supported and Effects 00:13:43.299 ============================== 00:13:43.299 Admin Commands 00:13:43.299 -------------- 00:13:43.299 Get Log Page (02h): Supported 00:13:43.299 Identify (06h): Supported 00:13:43.299 Abort (08h): Supported 00:13:43.299 Set Features (09h): Supported 00:13:43.299 Get Features (0Ah): Supported 00:13:43.299 Asynchronous Event Request (0Ch): Supported 00:13:43.299 Keep Alive (18h): Supported 00:13:43.299 I/O Commands 00:13:43.299 ------------ 00:13:43.299 Flush (00h): Supported LBA-Change 00:13:43.299 Write (01h): Supported LBA-Change 00:13:43.299 Read (02h): Supported 00:13:43.299 Compare (05h): Supported 00:13:43.299 Write Zeroes (08h): Supported LBA-Change 00:13:43.299 Dataset Management (09h): Supported LBA-Change 00:13:43.299 Copy (19h): Supported LBA-Change 00:13:43.299 00:13:43.299 Error Log 00:13:43.299 ========= 00:13:43.299 00:13:43.299 Arbitration 00:13:43.299 =========== 00:13:43.299 Arbitration Burst: 1 00:13:43.299 00:13:43.299 Power Management 00:13:43.299 ================ 00:13:43.299 Number of Power States: 1 00:13:43.299 Current Power State: Power State #0 00:13:43.299 Power State #0: 00:13:43.299 Max Power: 0.00 W 00:13:43.299 Non-Operational State: Operational 00:13:43.299 Entry Latency: Not Reported 00:13:43.299 Exit Latency: Not Reported 00:13:43.299 Relative Read Throughput: 0 00:13:43.299 Relative Read Latency: 0 00:13:43.299 Relative Write Throughput: 0 00:13:43.299 Relative Write Latency: 0 00:13:43.299 Idle Power: Not Reported 00:13:43.299 Active Power: Not Reported 00:13:43.299 Non-Operational Permissive Mode: Not Supported 00:13:43.299 00:13:43.299 Health Information 00:13:43.299 ================== 00:13:43.299 Critical Warnings: 00:13:43.299 Available Spare Space: OK 00:13:43.299 Temperature: OK 00:13:43.299 Device Reliability: OK 00:13:43.299 Read Only: No 00:13:43.299 Volatile Memory Backup: OK 00:13:43.299 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:43.299 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:43.299 Available Spare: 0% 00:13:43.299 
Available Sp[2024-07-15 09:22:30.465742] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:43.299 [2024-07-15 09:22:30.465755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:43.299 [2024-07-15 09:22:30.465785] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:13:43.299 [2024-07-15 09:22:30.465795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.299 [2024-07-15 09:22:30.465801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.299 [2024-07-15 09:22:30.465807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.299 [2024-07-15 09:22:30.465814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.299 [2024-07-15 09:22:30.465875] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:43.299 [2024-07-15 09:22:30.465886] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:43.299 [2024-07-15 09:22:30.466880] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:43.299 [2024-07-15 09:22:30.466920] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:13:43.299 [2024-07-15 09:22:30.466926] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:13:43.299 [2024-07-15 09:22:30.467884] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:43.299 [2024-07-15 09:22:30.467895] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:13:43.299 [2024-07-15 09:22:30.467955] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:43.299 [2024-07-15 09:22:30.471760] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:43.561 are Threshold: 0% 00:13:43.561 Life Percentage Used: 0% 00:13:43.561 Data Units Read: 0 00:13:43.561 Data Units Written: 0 00:13:43.561 Host Read Commands: 0 00:13:43.561 Host Write Commands: 0 00:13:43.561 Controller Busy Time: 0 minutes 00:13:43.561 Power Cycles: 0 00:13:43.561 Power On Hours: 0 hours 00:13:43.561 Unsafe Shutdowns: 0 00:13:43.561 Unrecoverable Media Errors: 0 00:13:43.561 Lifetime Error Log Entries: 0 00:13:43.561 Warning Temperature Time: 0 minutes 00:13:43.561 Critical Temperature Time: 0 minutes 00:13:43.561 00:13:43.561 Number of Queues 00:13:43.561 ================ 00:13:43.561 Number of I/O Submission Queues: 127 00:13:43.561 Number of I/O Completion Queues: 127 00:13:43.561 00:13:43.561 Active Namespaces 00:13:43.561 ================= 00:13:43.561 Namespace ID:1 00:13:43.561 Error Recovery Timeout: Unlimited 00:13:43.561 Command 
Set Identifier: NVM (00h) 00:13:43.561 Deallocate: Supported 00:13:43.561 Deallocated/Unwritten Error: Not Supported 00:13:43.561 Deallocated Read Value: Unknown 00:13:43.561 Deallocate in Write Zeroes: Not Supported 00:13:43.561 Deallocated Guard Field: 0xFFFF 00:13:43.561 Flush: Supported 00:13:43.561 Reservation: Supported 00:13:43.561 Namespace Sharing Capabilities: Multiple Controllers 00:13:43.561 Size (in LBAs): 131072 (0GiB) 00:13:43.561 Capacity (in LBAs): 131072 (0GiB) 00:13:43.561 Utilization (in LBAs): 131072 (0GiB) 00:13:43.561 NGUID: AAC0C051E20D4B2C9F570F006938CBDB 00:13:43.561 UUID: aac0c051-e20d-4b2c-9f57-0f006938cbdb 00:13:43.561 Thin Provisioning: Not Supported 00:13:43.561 Per-NS Atomic Units: Yes 00:13:43.561 Atomic Boundary Size (Normal): 0 00:13:43.561 Atomic Boundary Size (PFail): 0 00:13:43.561 Atomic Boundary Offset: 0 00:13:43.561 Maximum Single Source Range Length: 65535 00:13:43.561 Maximum Copy Length: 65535 00:13:43.561 Maximum Source Range Count: 1 00:13:43.561 NGUID/EUI64 Never Reused: No 00:13:43.561 Namespace Write Protected: No 00:13:43.561 Number of LBA Formats: 1 00:13:43.561 Current LBA Format: LBA Format #00 00:13:43.561 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:43.561 00:13:43.561 09:22:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:43.561 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.561 [2024-07-15 09:22:30.659395] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:48.853 Initializing NVMe Controllers 00:13:48.853 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:48.853 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:48.853 Initialization complete. Launching workers. 00:13:48.853 ======================================================== 00:13:48.853 Latency(us) 00:13:48.853 Device Information : IOPS MiB/s Average min max 00:13:48.853 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40004.60 156.27 3199.49 827.77 6850.90 00:13:48.853 ======================================================== 00:13:48.853 Total : 40004.60 156.27 3199.49 827.77 6850.90 00:13:48.853 00:13:48.853 [2024-07-15 09:22:35.678991] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:48.853 09:22:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:48.853 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.853 [2024-07-15 09:22:35.861928] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:54.139 Initializing NVMe Controllers 00:13:54.139 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:54.139 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:54.139 Initialization complete. Launching workers. 
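The two spdk_nvme_perf runs in this step differ only in the -w workload argument (read above, write below). A minimal sketch of how the same sweep could be scripted outside the harness, assuming only the binary path, transport string, and flags already shown in this log; the loop and the tee'd output files are illustrative, not part of the test suite:

    #!/usr/bin/env bash
    # Sketch only: re-run the perf binary against the vfio-user endpoint for
    # both workloads seen in this log. Binary path, transport string and the
    # remaining flags are copied verbatim from the sh@84/sh@85 commands above;
    # the loop and output file names are illustrative.
    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
    for wl in read write; do
        "$PERF" -r "$TRID" -s 256 -g -q 128 -o 4096 -w "$wl" -t 5 -c 0x2 | tee "perf_${wl}.log"
    done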
00:13:54.139 ======================================================== 00:13:54.139 Latency(us) 00:13:54.139 Device Information : IOPS MiB/s Average min max 00:13:54.139 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16055.96 62.72 7977.68 5984.98 8978.74 00:13:54.139 ======================================================== 00:13:54.139 Total : 16055.96 62.72 7977.68 5984.98 8978.74 00:13:54.139 00:13:54.139 [2024-07-15 09:22:40.903525] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:54.139 09:22:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:54.139 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.139 [2024-07-15 09:22:41.103422] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:59.493 [2024-07-15 09:22:46.207057] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:59.493 Initializing NVMe Controllers 00:13:59.493 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:59.493 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:59.493 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:59.493 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:59.493 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:59.493 Initialization complete. Launching workers. 00:13:59.493 Starting thread on core 2 00:13:59.493 Starting thread on core 3 00:13:59.493 Starting thread on core 1 00:13:59.493 09:22:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:59.493 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.493 [2024-07-15 09:22:46.481163] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:02.815 [2024-07-15 09:22:49.537337] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:02.815 Initializing NVMe Controllers 00:14:02.815 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:02.815 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:02.815 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:02.815 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:02.815 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:02.815 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:02.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:02.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:02.815 Initialization complete. Launching workers. 
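The arbitration example echoes its full run configuration before launching workers. A minimal sketch of invoking it standalone with the same flags the sh@87 step passes above, assuming the binary path and transport string from this log; the other values in the echoed configuration line are the example's own defaults and are not repeated here:

    #!/usr/bin/env bash
    # Sketch only: launch the arbitration example directly against the
    # vfio-user controller, using the flags from the sh@87 command above.
    ARB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
    "$ARB" -t 3 -r "$TRID" -d 256 -g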
00:14:02.815 Starting thread on core 1 with urgent priority queue 00:14:02.815 Starting thread on core 2 with urgent priority queue 00:14:02.815 Starting thread on core 3 with urgent priority queue 00:14:02.815 Starting thread on core 0 with urgent priority queue 00:14:02.815 SPDK bdev Controller (SPDK1 ) core 0: 8329.67 IO/s 12.01 secs/100000 ios 00:14:02.815 SPDK bdev Controller (SPDK1 ) core 1: 15772.33 IO/s 6.34 secs/100000 ios 00:14:02.815 SPDK bdev Controller (SPDK1 ) core 2: 8640.00 IO/s 11.57 secs/100000 ios 00:14:02.815 SPDK bdev Controller (SPDK1 ) core 3: 12166.67 IO/s 8.22 secs/100000 ios 00:14:02.815 ======================================================== 00:14:02.815 00:14:02.815 09:22:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:02.815 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.815 [2024-07-15 09:22:49.811205] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:02.815 Initializing NVMe Controllers 00:14:02.815 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:02.815 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:02.815 Namespace ID: 1 size: 0GB 00:14:02.815 Initialization complete. 00:14:02.815 INFO: using host memory buffer for IO 00:14:02.815 Hello world! 00:14:02.815 [2024-07-15 09:22:49.845376] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:02.815 09:22:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:02.815 EAL: No free 2048 kB hugepages reported on node 1 00:14:03.075 [2024-07-15 09:22:50.119234] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:04.017 Initializing NVMe Controllers 00:14:04.018 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:04.018 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:04.018 Initialization complete. Launching workers. 
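In the per-core arbitration summary above, the secs/100000 ios column is just 100000 divided by the IO/s column (for core 0, 100000 / 8329.67 is roughly 12.01 s). A small sketch that recomputes it, assuming the summary was captured to a hypothetical arbitration.log containing the example's raw stdout and that the line layout matches this log exactly:

    # Sketch only: recompute secs/100000 ios from the IO/s field.
    # Field positions assume the exact "SPDK bdev Controller (SPDK1 ) core N: ..." layout.
    grep 'SPDK bdev Controller (SPDK1 ) core' arbitration.log |
        awk '{ printf "core %s %.2f secs/100000 ios\n", $7, 100000 / $8 }'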
00:14:04.018 submit (in ns) avg, min, max = 7979.2, 3896.7, 4000153.3 00:14:04.018 complete (in ns) avg, min, max = 17964.2, 2384.2, 3998754.2 00:14:04.018 00:14:04.018 Submit histogram 00:14:04.018 ================ 00:14:04.018 Range in us Cumulative Count 00:14:04.018 3.893 - 3.920: 0.7690% ( 149) 00:14:04.018 3.920 - 3.947: 5.8523% ( 985) 00:14:04.018 3.947 - 3.973: 13.6399% ( 1509) 00:14:04.018 3.973 - 4.000: 24.4362% ( 2092) 00:14:04.018 4.000 - 4.027: 36.0427% ( 2249) 00:14:04.018 4.027 - 4.053: 50.4516% ( 2792) 00:14:04.018 4.053 - 4.080: 66.3261% ( 3076) 00:14:04.018 4.080 - 4.107: 80.9155% ( 2827) 00:14:04.018 4.107 - 4.133: 90.6229% ( 1881) 00:14:04.018 4.133 - 4.160: 96.1604% ( 1073) 00:14:04.018 4.160 - 4.187: 98.1886% ( 393) 00:14:04.018 4.187 - 4.213: 99.1123% ( 179) 00:14:04.018 4.213 - 4.240: 99.3291% ( 42) 00:14:04.018 4.240 - 4.267: 99.4065% ( 15) 00:14:04.018 4.267 - 4.293: 99.4426% ( 7) 00:14:04.018 4.293 - 4.320: 99.4478% ( 1) 00:14:04.018 4.320 - 4.347: 99.4530% ( 1) 00:14:04.018 4.347 - 4.373: 99.4581% ( 1) 00:14:04.018 4.400 - 4.427: 99.4633% ( 1) 00:14:04.018 4.427 - 4.453: 99.4736% ( 2) 00:14:04.018 4.507 - 4.533: 99.4788% ( 1) 00:14:04.018 4.560 - 4.587: 99.4839% ( 1) 00:14:04.018 4.693 - 4.720: 99.4891% ( 1) 00:14:04.018 4.773 - 4.800: 99.4942% ( 1) 00:14:04.018 4.853 - 4.880: 99.4994% ( 1) 00:14:04.018 5.013 - 5.040: 99.5046% ( 1) 00:14:04.018 5.600 - 5.627: 99.5149% ( 2) 00:14:04.018 5.707 - 5.733: 99.5252% ( 2) 00:14:04.018 5.920 - 5.947: 99.5304% ( 1) 00:14:04.018 5.973 - 6.000: 99.5355% ( 1) 00:14:04.018 6.000 - 6.027: 99.5407% ( 1) 00:14:04.018 6.053 - 6.080: 99.5459% ( 1) 00:14:04.018 6.107 - 6.133: 99.5510% ( 1) 00:14:04.018 6.133 - 6.160: 99.5562% ( 1) 00:14:04.018 6.240 - 6.267: 99.5613% ( 1) 00:14:04.018 6.267 - 6.293: 99.5665% ( 1) 00:14:04.018 6.293 - 6.320: 99.5717% ( 1) 00:14:04.018 6.373 - 6.400: 99.5768% ( 1) 00:14:04.018 6.400 - 6.427: 99.5820% ( 1) 00:14:04.018 6.507 - 6.533: 99.5871% ( 1) 00:14:04.018 6.640 - 6.667: 99.5923% ( 1) 00:14:04.018 6.667 - 6.693: 99.5975% ( 1) 00:14:04.018 6.693 - 6.720: 99.6026% ( 1) 00:14:04.018 6.827 - 6.880: 99.6078% ( 1) 00:14:04.018 6.880 - 6.933: 99.6129% ( 1) 00:14:04.018 6.987 - 7.040: 99.6284% ( 3) 00:14:04.018 7.040 - 7.093: 99.6491% ( 4) 00:14:04.018 7.093 - 7.147: 99.6646% ( 3) 00:14:04.018 7.147 - 7.200: 99.6749% ( 2) 00:14:04.018 7.200 - 7.253: 99.6852% ( 2) 00:14:04.018 7.253 - 7.307: 99.6955% ( 2) 00:14:04.018 7.360 - 7.413: 99.7316% ( 7) 00:14:04.018 7.413 - 7.467: 99.7368% ( 1) 00:14:04.018 7.467 - 7.520: 99.7420% ( 1) 00:14:04.018 7.520 - 7.573: 99.7471% ( 1) 00:14:04.018 7.573 - 7.627: 99.7523% ( 1) 00:14:04.018 7.627 - 7.680: 99.7678% ( 3) 00:14:04.018 7.680 - 7.733: 99.7832% ( 3) 00:14:04.018 7.787 - 7.840: 99.8091% ( 5) 00:14:04.018 7.840 - 7.893: 99.8194% ( 2) 00:14:04.018 7.893 - 7.947: 99.8245% ( 1) 00:14:04.018 7.947 - 8.000: 99.8349% ( 2) 00:14:04.018 8.160 - 8.213: 99.8400% ( 1) 00:14:04.018 8.213 - 8.267: 99.8503% ( 2) 00:14:04.018 8.267 - 8.320: 99.8607% ( 2) 00:14:04.018 8.320 - 8.373: 99.8710% ( 2) 00:14:04.018 8.480 - 8.533: 99.8761% ( 1) 00:14:04.018 8.587 - 8.640: 99.8813% ( 1) 00:14:04.018 9.173 - 9.227: 99.8865% ( 1) 00:14:04.018 9.280 - 9.333: 99.8916% ( 1) 00:14:04.018 13.387 - 13.440: 99.8968% ( 1) 00:14:04.018 14.613 - 14.720: 99.9019% ( 1) 00:14:04.018 3986.773 - 4014.080: 100.0000% ( 19) 00:14:04.018 00:14:04.018 Complete histogram 00:14:04.018 ================== 00:14:04.018 Range in us Cumulative Count 00:14:04.018 2.373 - 2.387: 0.0052% ( 1) 00:14:04.018 2.387 - 
2.400: 0.0103% ( 1) 00:14:04.018 2.400 - 2.413: 0.1135% ( 20) 00:14:04.018 2.413 - 2.427: 1.0941% ( 190) 00:14:04.018 2.427 - [2024-07-15 09:22:51.139648] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:04.018 2.440: 1.1560% ( 12) 00:14:04.018 2.440 - 2.453: 1.2695% ( 22) 00:14:04.018 2.453 - 2.467: 1.2850% ( 3) 00:14:04.018 2.467 - 2.480: 41.4564% ( 7784) 00:14:04.018 2.480 - 2.493: 61.0621% ( 3799) 00:14:04.018 2.493 - 2.507: 72.3383% ( 2185) 00:14:04.018 2.507 - 2.520: 79.6408% ( 1415) 00:14:04.018 2.520 - 2.533: 81.6380% ( 387) 00:14:04.018 2.533 - 2.547: 84.1823% ( 493) 00:14:04.018 2.547 - 2.560: 90.1688% ( 1160) 00:14:04.018 2.560 - 2.573: 94.6328% ( 865) 00:14:04.018 2.573 - 2.587: 97.0945% ( 477) 00:14:04.018 2.587 - 2.600: 98.6014% ( 292) 00:14:04.018 2.600 - 2.613: 99.1691% ( 110) 00:14:04.018 2.613 - 2.627: 99.3343% ( 32) 00:14:04.018 2.627 - 2.640: 99.3755% ( 8) 00:14:04.018 2.640 - 2.653: 99.3807% ( 1) 00:14:04.018 4.720 - 4.747: 99.3859% ( 1) 00:14:04.018 4.747 - 4.773: 99.3910% ( 1) 00:14:04.018 4.853 - 4.880: 99.3962% ( 1) 00:14:04.018 4.960 - 4.987: 99.4065% ( 2) 00:14:04.018 5.120 - 5.147: 99.4117% ( 1) 00:14:04.018 5.227 - 5.253: 99.4168% ( 1) 00:14:04.018 5.253 - 5.280: 99.4220% ( 1) 00:14:04.018 5.280 - 5.307: 99.4323% ( 2) 00:14:04.018 5.307 - 5.333: 99.4426% ( 2) 00:14:04.018 5.360 - 5.387: 99.4478% ( 1) 00:14:04.018 5.387 - 5.413: 99.4530% ( 1) 00:14:04.018 5.413 - 5.440: 99.4633% ( 2) 00:14:04.018 5.440 - 5.467: 99.4684% ( 1) 00:14:04.018 5.493 - 5.520: 99.4891% ( 4) 00:14:04.018 5.547 - 5.573: 99.4942% ( 1) 00:14:04.018 5.600 - 5.627: 99.4994% ( 1) 00:14:04.018 5.653 - 5.680: 99.5046% ( 1) 00:14:04.018 5.680 - 5.707: 99.5097% ( 1) 00:14:04.018 5.733 - 5.760: 99.5149% ( 1) 00:14:04.018 5.760 - 5.787: 99.5200% ( 1) 00:14:04.018 5.787 - 5.813: 99.5252% ( 1) 00:14:04.018 5.840 - 5.867: 99.5304% ( 1) 00:14:04.018 5.920 - 5.947: 99.5355% ( 1) 00:14:04.018 5.947 - 5.973: 99.5407% ( 1) 00:14:04.018 6.000 - 6.027: 99.5459% ( 1) 00:14:04.018 6.053 - 6.080: 99.5613% ( 3) 00:14:04.018 6.213 - 6.240: 99.5665% ( 1) 00:14:04.018 6.347 - 6.373: 99.5768% ( 2) 00:14:04.018 6.373 - 6.400: 99.5820% ( 1) 00:14:04.018 6.400 - 6.427: 99.5871% ( 1) 00:14:04.018 6.613 - 6.640: 99.5923% ( 1) 00:14:04.018 6.880 - 6.933: 99.5975% ( 1) 00:14:04.018 11.253 - 11.307: 99.6026% ( 1) 00:14:04.018 13.280 - 13.333: 99.6078% ( 1) 00:14:04.018 16.213 - 16.320: 99.6129% ( 1) 00:14:04.018 3850.240 - 3877.547: 99.6181% ( 1) 00:14:04.018 3986.773 - 4014.080: 100.0000% ( 74) 00:14:04.018 00:14:04.018 09:22:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:04.018 09:22:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:04.019 09:22:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:04.019 09:22:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:04.019 09:22:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:04.277 [ 00:14:04.277 { 00:14:04.277 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:04.277 "subtype": "Discovery", 00:14:04.277 "listen_addresses": [], 00:14:04.277 "allow_any_host": true, 00:14:04.277 "hosts": [] 00:14:04.277 }, 00:14:04.277 { 00:14:04.277 "nqn": 
"nqn.2019-07.io.spdk:cnode1", 00:14:04.277 "subtype": "NVMe", 00:14:04.277 "listen_addresses": [ 00:14:04.277 { 00:14:04.277 "trtype": "VFIOUSER", 00:14:04.277 "adrfam": "IPv4", 00:14:04.277 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:04.277 "trsvcid": "0" 00:14:04.277 } 00:14:04.277 ], 00:14:04.277 "allow_any_host": true, 00:14:04.277 "hosts": [], 00:14:04.277 "serial_number": "SPDK1", 00:14:04.277 "model_number": "SPDK bdev Controller", 00:14:04.277 "max_namespaces": 32, 00:14:04.277 "min_cntlid": 1, 00:14:04.277 "max_cntlid": 65519, 00:14:04.277 "namespaces": [ 00:14:04.277 { 00:14:04.277 "nsid": 1, 00:14:04.277 "bdev_name": "Malloc1", 00:14:04.277 "name": "Malloc1", 00:14:04.277 "nguid": "AAC0C051E20D4B2C9F570F006938CBDB", 00:14:04.277 "uuid": "aac0c051-e20d-4b2c-9f57-0f006938cbdb" 00:14:04.277 } 00:14:04.277 ] 00:14:04.277 }, 00:14:04.277 { 00:14:04.277 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:04.277 "subtype": "NVMe", 00:14:04.277 "listen_addresses": [ 00:14:04.277 { 00:14:04.277 "trtype": "VFIOUSER", 00:14:04.277 "adrfam": "IPv4", 00:14:04.277 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:04.277 "trsvcid": "0" 00:14:04.277 } 00:14:04.277 ], 00:14:04.277 "allow_any_host": true, 00:14:04.277 "hosts": [], 00:14:04.277 "serial_number": "SPDK2", 00:14:04.277 "model_number": "SPDK bdev Controller", 00:14:04.277 "max_namespaces": 32, 00:14:04.277 "min_cntlid": 1, 00:14:04.277 "max_cntlid": 65519, 00:14:04.277 "namespaces": [ 00:14:04.277 { 00:14:04.277 "nsid": 1, 00:14:04.277 "bdev_name": "Malloc2", 00:14:04.277 "name": "Malloc2", 00:14:04.277 "nguid": "B03E7896CE9640E5812331B995E6D87C", 00:14:04.277 "uuid": "b03e7896-ce96-40e5-8123-31b995e6d87c" 00:14:04.277 } 00:14:04.277 ] 00:14:04.277 } 00:14:04.277 ] 00:14:04.277 09:22:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:04.277 09:22:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=606601 00:14:04.277 09:22:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:04.277 09:22:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:04.277 09:22:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:04.277 09:22:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:04.277 09:22:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:04.277 09:22:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:04.277 09:22:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:04.277 09:22:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:04.277 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.536 Malloc3 00:14:04.536 [2024-07-15 09:22:51.537230] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:04.536 09:22:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:04.536 [2024-07-15 09:22:51.701290] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:04.536 09:22:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:04.795 Asynchronous Event Request test 00:14:04.795 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:04.795 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:04.795 Registering asynchronous event callbacks... 00:14:04.795 Starting namespace attribute notice tests for all controllers... 00:14:04.795 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:04.795 aer_cb - Changed Namespace 00:14:04.795 Cleaning up... 00:14:04.795 [ 00:14:04.795 { 00:14:04.795 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:04.795 "subtype": "Discovery", 00:14:04.795 "listen_addresses": [], 00:14:04.795 "allow_any_host": true, 00:14:04.795 "hosts": [] 00:14:04.795 }, 00:14:04.795 { 00:14:04.795 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:04.795 "subtype": "NVMe", 00:14:04.795 "listen_addresses": [ 00:14:04.795 { 00:14:04.795 "trtype": "VFIOUSER", 00:14:04.795 "adrfam": "IPv4", 00:14:04.795 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:04.795 "trsvcid": "0" 00:14:04.795 } 00:14:04.795 ], 00:14:04.795 "allow_any_host": true, 00:14:04.795 "hosts": [], 00:14:04.795 "serial_number": "SPDK1", 00:14:04.795 "model_number": "SPDK bdev Controller", 00:14:04.795 "max_namespaces": 32, 00:14:04.795 "min_cntlid": 1, 00:14:04.795 "max_cntlid": 65519, 00:14:04.795 "namespaces": [ 00:14:04.795 { 00:14:04.795 "nsid": 1, 00:14:04.795 "bdev_name": "Malloc1", 00:14:04.795 "name": "Malloc1", 00:14:04.795 "nguid": "AAC0C051E20D4B2C9F570F006938CBDB", 00:14:04.795 "uuid": "aac0c051-e20d-4b2c-9f57-0f006938cbdb" 00:14:04.795 }, 00:14:04.795 { 00:14:04.795 "nsid": 2, 00:14:04.795 "bdev_name": "Malloc3", 00:14:04.795 "name": "Malloc3", 00:14:04.795 "nguid": "348B245517A6403F89C65D9B5B851703", 00:14:04.795 "uuid": "348b2455-17a6-403f-89c6-5d9b5b851703" 00:14:04.795 } 00:14:04.795 ] 00:14:04.795 }, 00:14:04.795 { 00:14:04.795 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:04.795 "subtype": "NVMe", 00:14:04.795 "listen_addresses": [ 00:14:04.795 { 00:14:04.795 "trtype": "VFIOUSER", 00:14:04.795 "adrfam": "IPv4", 00:14:04.795 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:04.795 "trsvcid": "0" 00:14:04.795 } 00:14:04.795 ], 00:14:04.796 "allow_any_host": true, 00:14:04.796 "hosts": [], 00:14:04.796 "serial_number": "SPDK2", 00:14:04.796 "model_number": "SPDK bdev Controller", 00:14:04.796 
"max_namespaces": 32, 00:14:04.796 "min_cntlid": 1, 00:14:04.796 "max_cntlid": 65519, 00:14:04.796 "namespaces": [ 00:14:04.796 { 00:14:04.796 "nsid": 1, 00:14:04.796 "bdev_name": "Malloc2", 00:14:04.796 "name": "Malloc2", 00:14:04.796 "nguid": "B03E7896CE9640E5812331B995E6D87C", 00:14:04.796 "uuid": "b03e7896-ce96-40e5-8123-31b995e6d87c" 00:14:04.796 } 00:14:04.796 ] 00:14:04.796 } 00:14:04.796 ] 00:14:04.796 09:22:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 606601 00:14:04.796 09:22:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:04.796 09:22:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:04.796 09:22:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:04.796 09:22:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:04.796 [2024-07-15 09:22:51.915207] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:14:04.796 [2024-07-15 09:22:51.915266] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid606616 ] 00:14:04.796 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.796 [2024-07-15 09:22:51.946330] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:04.796 [2024-07-15 09:22:51.951534] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:04.796 [2024-07-15 09:22:51.951556] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fdc6f6c5000 00:14:04.796 [2024-07-15 09:22:51.952532] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.796 [2024-07-15 09:22:51.953541] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.796 [2024-07-15 09:22:51.954551] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.796 [2024-07-15 09:22:51.955557] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:04.796 [2024-07-15 09:22:51.956564] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:04.796 [2024-07-15 09:22:51.957571] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.796 [2024-07-15 09:22:51.958578] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:04.796 [2024-07-15 09:22:51.959603] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.796 [2024-07-15 09:22:51.960598] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:04.796 [2024-07-15 09:22:51.960608] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fdc6f6ba000 00:14:04.796 [2024-07-15 09:22:51.961935] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:04.796 [2024-07-15 09:22:51.982907] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:04.796 [2024-07-15 09:22:51.982929] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:04.796 [2024-07-15 09:22:51.984989] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:04.796 [2024-07-15 09:22:51.985035] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:04.796 [2024-07-15 09:22:51.985118] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:14:04.796 [2024-07-15 09:22:51.985134] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:04.796 [2024-07-15 09:22:51.985139] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:04.796 [2024-07-15 09:22:51.985996] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:04.796 [2024-07-15 09:22:51.986005] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:04.796 [2024-07-15 09:22:51.986012] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:04.796 [2024-07-15 09:22:51.987004] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:04.796 [2024-07-15 09:22:51.987013] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:04.796 [2024-07-15 09:22:51.987020] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:04.796 [2024-07-15 09:22:51.988005] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:04.796 [2024-07-15 09:22:51.988014] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:04.796 [2024-07-15 09:22:51.989012] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:04.796 [2024-07-15 09:22:51.989020] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:04.796 [2024-07-15 09:22:51.989025] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:04.796 [2024-07-15 09:22:51.989032] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:04.796 [2024-07-15 09:22:51.989137] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:04.796 [2024-07-15 09:22:51.989142] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:04.796 [2024-07-15 09:22:51.989149] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:04.796 [2024-07-15 09:22:51.990017] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:04.796 [2024-07-15 09:22:51.991026] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:04.796 [2024-07-15 09:22:51.992030] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:04.796 [2024-07-15 09:22:51.993038] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:04.796 [2024-07-15 09:22:51.993076] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:04.796 [2024-07-15 09:22:51.994051] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:04.796 [2024-07-15 09:22:51.994059] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:04.796 [2024-07-15 09:22:51.994064] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:04.796 [2024-07-15 09:22:51.994085] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:04.796 [2024-07-15 09:22:51.994092] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:04.796 [2024-07-15 09:22:51.994104] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:04.796 [2024-07-15 09:22:51.994109] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:04.796 [2024-07-15 09:22:51.994121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:05.058 [2024-07-15 09:22:52.000759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:05.058 [2024-07-15 09:22:52.000771] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:05.058 [2024-07-15 09:22:52.000779] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:05.058 [2024-07-15 09:22:52.000783] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:05.058 [2024-07-15 09:22:52.000788] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:05.058 [2024-07-15 09:22:52.000792] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:05.058 [2024-07-15 09:22:52.000797] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:05.058 [2024-07-15 09:22:52.000801] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:05.058 [2024-07-15 09:22:52.000809] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:05.058 [2024-07-15 09:22:52.000819] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:05.058 [2024-07-15 09:22:52.008757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:05.058 [2024-07-15 09:22:52.008774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.058 [2024-07-15 09:22:52.008782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.058 [2024-07-15 09:22:52.008791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.058 [2024-07-15 09:22:52.008799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.058 [2024-07-15 09:22:52.008804] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:05.058 [2024-07-15 09:22:52.008812] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:05.058 [2024-07-15 09:22:52.008821] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:05.058 [2024-07-15 09:22:52.016757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:05.058 [2024-07-15 09:22:52.016765] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:05.058 [2024-07-15 09:22:52.016770] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:05.058 [2024-07-15 09:22:52.016777] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:05.058 [2024-07-15 09:22:52.016782] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:05.058 [2024-07-15 09:22:52.016792] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:05.058 [2024-07-15 09:22:52.024757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:05.058 [2024-07-15 09:22:52.024819] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:05.058 [2024-07-15 09:22:52.024827] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:05.058 [2024-07-15 09:22:52.024835] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:05.058 [2024-07-15 09:22:52.024839] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:05.058 [2024-07-15 09:22:52.024845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:05.058 [2024-07-15 09:22:52.032757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:05.058 [2024-07-15 09:22:52.032768] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:05.058 [2024-07-15 09:22:52.032777] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:05.058 [2024-07-15 09:22:52.032785] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:05.058 [2024-07-15 09:22:52.032792] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:05.058 [2024-07-15 09:22:52.032796] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:05.058 [2024-07-15 09:22:52.032802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:05.058 [2024-07-15 09:22:52.040757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:05.058 [2024-07-15 09:22:52.040772] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:05.058 [2024-07-15 09:22:52.040780] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:05.058 [2024-07-15 09:22:52.040787] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:05.058 [2024-07-15 09:22:52.040791] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:05.058 [2024-07-15 09:22:52.040797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:05.058 [2024-07-15 09:22:52.048756] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:05.058 [2024-07-15 09:22:52.048767] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:05.058 [2024-07-15 09:22:52.048773] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:05.058 [2024-07-15 09:22:52.048781] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:05.058 [2024-07-15 09:22:52.048787] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:14:05.058 [2024-07-15 09:22:52.048792] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:05.058 [2024-07-15 09:22:52.048797] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:05.058 [2024-07-15 09:22:52.048802] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:05.058 [2024-07-15 09:22:52.048806] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:05.058 [2024-07-15 09:22:52.048811] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:05.058 [2024-07-15 09:22:52.048827] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:05.058 [2024-07-15 09:22:52.056757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:05.058 [2024-07-15 09:22:52.056772] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:05.058 [2024-07-15 09:22:52.064758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:05.058 [2024-07-15 09:22:52.064771] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:05.058 [2024-07-15 09:22:52.072759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:05.058 [2024-07-15 09:22:52.072772] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:05.058 [2024-07-15 09:22:52.080757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:05.058 [2024-07-15 09:22:52.080776] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:05.058 [2024-07-15 09:22:52.080783] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:05.058 [2024-07-15 09:22:52.080787] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:14:05.058 [2024-07-15 09:22:52.080791] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:05.058 [2024-07-15 09:22:52.080797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:05.058 [2024-07-15 09:22:52.080805] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:05.058 [2024-07-15 09:22:52.080809] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:05.058 [2024-07-15 09:22:52.080815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:05.058 [2024-07-15 09:22:52.080822] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:05.058 [2024-07-15 09:22:52.080826] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:05.058 [2024-07-15 09:22:52.080832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:05.058 [2024-07-15 09:22:52.080839] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:05.059 [2024-07-15 09:22:52.080844] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:05.059 [2024-07-15 09:22:52.080849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:05.059 [2024-07-15 09:22:52.088759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:05.059 [2024-07-15 09:22:52.088775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:05.059 [2024-07-15 09:22:52.088785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:05.059 [2024-07-15 09:22:52.088792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:05.059 ===================================================== 00:14:05.059 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:05.059 ===================================================== 00:14:05.059 Controller Capabilities/Features 00:14:05.059 ================================ 00:14:05.059 Vendor ID: 4e58 00:14:05.059 Subsystem Vendor ID: 4e58 00:14:05.059 Serial Number: SPDK2 00:14:05.059 Model Number: SPDK bdev Controller 00:14:05.059 Firmware Version: 24.09 00:14:05.059 Recommended Arb Burst: 6 00:14:05.059 IEEE OUI Identifier: 8d 6b 50 00:14:05.059 Multi-path I/O 00:14:05.059 May have multiple subsystem ports: Yes 00:14:05.059 May have multiple controllers: Yes 00:14:05.059 Associated with SR-IOV VF: No 00:14:05.059 Max Data Transfer Size: 131072 00:14:05.059 Max Number of Namespaces: 32 00:14:05.059 Max Number of I/O Queues: 127 00:14:05.059 NVMe Specification Version (VS): 1.3 00:14:05.059 NVMe Specification Version (Identify): 1.3 00:14:05.059 Maximum Queue Entries: 256 00:14:05.059 Contiguous Queues Required: Yes 00:14:05.059 Arbitration Mechanisms 
Supported 00:14:05.059 Weighted Round Robin: Not Supported 00:14:05.059 Vendor Specific: Not Supported 00:14:05.059 Reset Timeout: 15000 ms 00:14:05.059 Doorbell Stride: 4 bytes 00:14:05.059 NVM Subsystem Reset: Not Supported 00:14:05.059 Command Sets Supported 00:14:05.059 NVM Command Set: Supported 00:14:05.059 Boot Partition: Not Supported 00:14:05.059 Memory Page Size Minimum: 4096 bytes 00:14:05.059 Memory Page Size Maximum: 4096 bytes 00:14:05.059 Persistent Memory Region: Not Supported 00:14:05.059 Optional Asynchronous Events Supported 00:14:05.059 Namespace Attribute Notices: Supported 00:14:05.059 Firmware Activation Notices: Not Supported 00:14:05.059 ANA Change Notices: Not Supported 00:14:05.059 PLE Aggregate Log Change Notices: Not Supported 00:14:05.059 LBA Status Info Alert Notices: Not Supported 00:14:05.059 EGE Aggregate Log Change Notices: Not Supported 00:14:05.059 Normal NVM Subsystem Shutdown event: Not Supported 00:14:05.059 Zone Descriptor Change Notices: Not Supported 00:14:05.059 Discovery Log Change Notices: Not Supported 00:14:05.059 Controller Attributes 00:14:05.059 128-bit Host Identifier: Supported 00:14:05.059 Non-Operational Permissive Mode: Not Supported 00:14:05.059 NVM Sets: Not Supported 00:14:05.059 Read Recovery Levels: Not Supported 00:14:05.059 Endurance Groups: Not Supported 00:14:05.059 Predictable Latency Mode: Not Supported 00:14:05.059 Traffic Based Keep ALive: Not Supported 00:14:05.059 Namespace Granularity: Not Supported 00:14:05.059 SQ Associations: Not Supported 00:14:05.059 UUID List: Not Supported 00:14:05.059 Multi-Domain Subsystem: Not Supported 00:14:05.059 Fixed Capacity Management: Not Supported 00:14:05.059 Variable Capacity Management: Not Supported 00:14:05.059 Delete Endurance Group: Not Supported 00:14:05.059 Delete NVM Set: Not Supported 00:14:05.059 Extended LBA Formats Supported: Not Supported 00:14:05.059 Flexible Data Placement Supported: Not Supported 00:14:05.059 00:14:05.059 Controller Memory Buffer Support 00:14:05.059 ================================ 00:14:05.059 Supported: No 00:14:05.059 00:14:05.059 Persistent Memory Region Support 00:14:05.059 ================================ 00:14:05.059 Supported: No 00:14:05.059 00:14:05.059 Admin Command Set Attributes 00:14:05.059 ============================ 00:14:05.059 Security Send/Receive: Not Supported 00:14:05.059 Format NVM: Not Supported 00:14:05.059 Firmware Activate/Download: Not Supported 00:14:05.059 Namespace Management: Not Supported 00:14:05.059 Device Self-Test: Not Supported 00:14:05.059 Directives: Not Supported 00:14:05.059 NVMe-MI: Not Supported 00:14:05.059 Virtualization Management: Not Supported 00:14:05.059 Doorbell Buffer Config: Not Supported 00:14:05.059 Get LBA Status Capability: Not Supported 00:14:05.059 Command & Feature Lockdown Capability: Not Supported 00:14:05.059 Abort Command Limit: 4 00:14:05.059 Async Event Request Limit: 4 00:14:05.059 Number of Firmware Slots: N/A 00:14:05.059 Firmware Slot 1 Read-Only: N/A 00:14:05.059 Firmware Activation Without Reset: N/A 00:14:05.059 Multiple Update Detection Support: N/A 00:14:05.059 Firmware Update Granularity: No Information Provided 00:14:05.059 Per-Namespace SMART Log: No 00:14:05.059 Asymmetric Namespace Access Log Page: Not Supported 00:14:05.059 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:05.059 Command Effects Log Page: Supported 00:14:05.059 Get Log Page Extended Data: Supported 00:14:05.059 Telemetry Log Pages: Not Supported 00:14:05.059 Persistent Event Log Pages: Not Supported 
00:14:05.059 Supported Log Pages Log Page: May Support 00:14:05.059 Commands Supported & Effects Log Page: Not Supported 00:14:05.059 Feature Identifiers & Effects Log Page:May Support 00:14:05.059 NVMe-MI Commands & Effects Log Page: May Support 00:14:05.059 Data Area 4 for Telemetry Log: Not Supported 00:14:05.059 Error Log Page Entries Supported: 128 00:14:05.059 Keep Alive: Supported 00:14:05.059 Keep Alive Granularity: 10000 ms 00:14:05.059 00:14:05.059 NVM Command Set Attributes 00:14:05.059 ========================== 00:14:05.059 Submission Queue Entry Size 00:14:05.059 Max: 64 00:14:05.059 Min: 64 00:14:05.059 Completion Queue Entry Size 00:14:05.059 Max: 16 00:14:05.059 Min: 16 00:14:05.059 Number of Namespaces: 32 00:14:05.059 Compare Command: Supported 00:14:05.059 Write Uncorrectable Command: Not Supported 00:14:05.059 Dataset Management Command: Supported 00:14:05.059 Write Zeroes Command: Supported 00:14:05.059 Set Features Save Field: Not Supported 00:14:05.059 Reservations: Not Supported 00:14:05.059 Timestamp: Not Supported 00:14:05.059 Copy: Supported 00:14:05.059 Volatile Write Cache: Present 00:14:05.059 Atomic Write Unit (Normal): 1 00:14:05.059 Atomic Write Unit (PFail): 1 00:14:05.059 Atomic Compare & Write Unit: 1 00:14:05.059 Fused Compare & Write: Supported 00:14:05.059 Scatter-Gather List 00:14:05.059 SGL Command Set: Supported (Dword aligned) 00:14:05.059 SGL Keyed: Not Supported 00:14:05.059 SGL Bit Bucket Descriptor: Not Supported 00:14:05.059 SGL Metadata Pointer: Not Supported 00:14:05.059 Oversized SGL: Not Supported 00:14:05.059 SGL Metadata Address: Not Supported 00:14:05.059 SGL Offset: Not Supported 00:14:05.059 Transport SGL Data Block: Not Supported 00:14:05.059 Replay Protected Memory Block: Not Supported 00:14:05.059 00:14:05.059 Firmware Slot Information 00:14:05.059 ========================= 00:14:05.059 Active slot: 1 00:14:05.059 Slot 1 Firmware Revision: 24.09 00:14:05.059 00:14:05.059 00:14:05.059 Commands Supported and Effects 00:14:05.059 ============================== 00:14:05.059 Admin Commands 00:14:05.059 -------------- 00:14:05.059 Get Log Page (02h): Supported 00:14:05.059 Identify (06h): Supported 00:14:05.059 Abort (08h): Supported 00:14:05.059 Set Features (09h): Supported 00:14:05.059 Get Features (0Ah): Supported 00:14:05.059 Asynchronous Event Request (0Ch): Supported 00:14:05.059 Keep Alive (18h): Supported 00:14:05.059 I/O Commands 00:14:05.059 ------------ 00:14:05.059 Flush (00h): Supported LBA-Change 00:14:05.059 Write (01h): Supported LBA-Change 00:14:05.059 Read (02h): Supported 00:14:05.059 Compare (05h): Supported 00:14:05.059 Write Zeroes (08h): Supported LBA-Change 00:14:05.059 Dataset Management (09h): Supported LBA-Change 00:14:05.059 Copy (19h): Supported LBA-Change 00:14:05.059 00:14:05.059 Error Log 00:14:05.059 ========= 00:14:05.059 00:14:05.059 Arbitration 00:14:05.059 =========== 00:14:05.059 Arbitration Burst: 1 00:14:05.059 00:14:05.059 Power Management 00:14:05.059 ================ 00:14:05.059 Number of Power States: 1 00:14:05.059 Current Power State: Power State #0 00:14:05.059 Power State #0: 00:14:05.059 Max Power: 0.00 W 00:14:05.059 Non-Operational State: Operational 00:14:05.059 Entry Latency: Not Reported 00:14:05.059 Exit Latency: Not Reported 00:14:05.059 Relative Read Throughput: 0 00:14:05.059 Relative Read Latency: 0 00:14:05.059 Relative Write Throughput: 0 00:14:05.059 Relative Write Latency: 0 00:14:05.059 Idle Power: Not Reported 00:14:05.059 Active Power: Not Reported 00:14:05.059 
Non-Operational Permissive Mode: Not Supported 00:14:05.059 00:14:05.059 Health Information 00:14:05.059 ================== 00:14:05.059 Critical Warnings: 00:14:05.059 Available Spare Space: OK 00:14:05.059 Temperature: OK 00:14:05.059 Device Reliability: OK 00:14:05.059 Read Only: No 00:14:05.059 Volatile Memory Backup: OK 00:14:05.059 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:05.059 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:05.059 Available Spare: 0% 00:14:05.059 Available Sp[2024-07-15 09:22:52.088892] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:05.059 [2024-07-15 09:22:52.096759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:05.059 [2024-07-15 09:22:52.096791] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:05.059 [2024-07-15 09:22:52.096801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.059 [2024-07-15 09:22:52.096808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.059 [2024-07-15 09:22:52.096814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.059 [2024-07-15 09:22:52.096821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.059 [2024-07-15 09:22:52.096858] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:05.059 [2024-07-15 09:22:52.096869] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:05.059 [2024-07-15 09:22:52.097867] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:05.059 [2024-07-15 09:22:52.097921] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:05.059 [2024-07-15 09:22:52.097928] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:05.059 [2024-07-15 09:22:52.098872] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:05.059 [2024-07-15 09:22:52.098884] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:05.059 [2024-07-15 09:22:52.098931] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:05.059 [2024-07-15 09:22:52.101760] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:05.059 are Threshold: 0% 00:14:05.059 Life Percentage Used: 0% 00:14:05.059 Data Units Read: 0 00:14:05.059 Data Units Written: 0 00:14:05.059 Host Read Commands: 0 00:14:05.059 Host Write Commands: 0 00:14:05.059 Controller Busy Time: 0 minutes 00:14:05.059 Power Cycles: 0 00:14:05.059 Power On Hours: 0 hours 00:14:05.059 Unsafe Shutdowns: 0 00:14:05.059 Unrecoverable Media 
Errors: 0 00:14:05.059 Lifetime Error Log Entries: 0 00:14:05.059 Warning Temperature Time: 0 minutes 00:14:05.059 Critical Temperature Time: 0 minutes 00:14:05.059 00:14:05.059 Number of Queues 00:14:05.059 ================ 00:14:05.059 Number of I/O Submission Queues: 127 00:14:05.059 Number of I/O Completion Queues: 127 00:14:05.059 00:14:05.059 Active Namespaces 00:14:05.059 ================= 00:14:05.059 Namespace ID:1 00:14:05.059 Error Recovery Timeout: Unlimited 00:14:05.059 Command Set Identifier: NVM (00h) 00:14:05.059 Deallocate: Supported 00:14:05.059 Deallocated/Unwritten Error: Not Supported 00:14:05.059 Deallocated Read Value: Unknown 00:14:05.059 Deallocate in Write Zeroes: Not Supported 00:14:05.059 Deallocated Guard Field: 0xFFFF 00:14:05.059 Flush: Supported 00:14:05.059 Reservation: Supported 00:14:05.059 Namespace Sharing Capabilities: Multiple Controllers 00:14:05.059 Size (in LBAs): 131072 (0GiB) 00:14:05.059 Capacity (in LBAs): 131072 (0GiB) 00:14:05.059 Utilization (in LBAs): 131072 (0GiB) 00:14:05.059 NGUID: B03E7896CE9640E5812331B995E6D87C 00:14:05.059 UUID: b03e7896-ce96-40e5-8123-31b995e6d87c 00:14:05.059 Thin Provisioning: Not Supported 00:14:05.059 Per-NS Atomic Units: Yes 00:14:05.059 Atomic Boundary Size (Normal): 0 00:14:05.059 Atomic Boundary Size (PFail): 0 00:14:05.059 Atomic Boundary Offset: 0 00:14:05.059 Maximum Single Source Range Length: 65535 00:14:05.059 Maximum Copy Length: 65535 00:14:05.059 Maximum Source Range Count: 1 00:14:05.059 NGUID/EUI64 Never Reused: No 00:14:05.059 Namespace Write Protected: No 00:14:05.059 Number of LBA Formats: 1 00:14:05.059 Current LBA Format: LBA Format #00 00:14:05.059 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:05.059 00:14:05.059 09:22:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:05.059 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.320 [2024-07-15 09:22:52.291790] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:10.604 Initializing NVMe Controllers 00:14:10.604 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:10.604 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:10.604 Initialization complete. Launching workers. 
00:14:10.604 ========================================================
00:14:10.604 Latency(us)
00:14:10.604 Device Information : IOPS MiB/s Average min max
00:14:10.604 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40022.60 156.34 3200.58 829.15 6857.00
00:14:10.604 ========================================================
00:14:10.604 Total : 40022.60 156.34 3200.58 829.15 6857.00
00:14:10.604
00:14:10.604 [2024-07-15 09:22:57.398936] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:14:10.604 09:22:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:14:10.604 EAL: No free 2048 kB hugepages reported on node 1
00:14:10.604 [2024-07-15 09:22:57.582577] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:14:15.894 Initializing NVMe Controllers
00:14:15.894 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:14:15.894 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:14:15.894 Initialization complete. Launching workers.
00:14:15.894 ========================================================
00:14:15.894 Latency(us)
00:14:15.894 Device Information : IOPS MiB/s Average min max
00:14:15.894 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35974.60 140.53 3558.20 1098.78 8168.26
00:14:15.894 ========================================================
00:14:15.894 Total : 35974.60 140.53 3558.20 1098.78 8168.26
00:14:15.894
00:14:15.894 [2024-07-15 09:23:02.602917] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:14:15.894 09:23:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:14:15.894 EAL: No free 2048 kB hugepages reported on node 1
00:14:15.894 [2024-07-15 09:23:02.806153] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:14:21.182 [2024-07-15 09:23:07.939833] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:14:21.182 Initializing NVMe Controllers
00:14:21.182 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:14:21.182 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:14:21.182 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1
00:14:21.182 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2
00:14:21.182 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3
00:14:21.182 Initialization complete. Launching workers.
00:14:21.182 Starting thread on core 2 00:14:21.182 Starting thread on core 3 00:14:21.182 Starting thread on core 1 00:14:21.182 09:23:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:21.182 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.182 [2024-07-15 09:23:08.207977] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:24.481 [2024-07-15 09:23:11.273563] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:24.481 Initializing NVMe Controllers 00:14:24.481 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:24.481 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:24.481 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:24.481 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:24.481 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:24.481 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:24.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:24.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:24.481 Initialization complete. Launching workers. 00:14:24.481 Starting thread on core 1 with urgent priority queue 00:14:24.481 Starting thread on core 2 with urgent priority queue 00:14:24.481 Starting thread on core 3 with urgent priority queue 00:14:24.481 Starting thread on core 0 with urgent priority queue 00:14:24.481 SPDK bdev Controller (SPDK2 ) core 0: 12243.67 IO/s 8.17 secs/100000 ios 00:14:24.481 SPDK bdev Controller (SPDK2 ) core 1: 8704.67 IO/s 11.49 secs/100000 ios 00:14:24.481 SPDK bdev Controller (SPDK2 ) core 2: 8042.67 IO/s 12.43 secs/100000 ios 00:14:24.481 SPDK bdev Controller (SPDK2 ) core 3: 8021.33 IO/s 12.47 secs/100000 ios 00:14:24.481 ======================================================== 00:14:24.481 00:14:24.481 09:23:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:24.481 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.481 [2024-07-15 09:23:11.545229] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:24.481 Initializing NVMe Controllers 00:14:24.481 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:24.481 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:24.481 Namespace ID: 1 size: 0GB 00:14:24.481 Initialization complete. 00:14:24.481 INFO: using host memory buffer for IO 00:14:24.481 Hello world! 
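[Editor's note] The example runs in this stretch of the log (spdk_nvme_perf read and write, reconnect, arbitration, hello_world) all address the same vfio-user controller through one transport ID string. A minimal hand-run sketch of the read phase follows; it assumes the target configured earlier in this log is still listening on that socket and is run from an SPDK checkout, and it keeps the memory-related flags -s 256 -g exactly as the test script passed them:

# Assumption: an SPDK target already exports nqn.2019-07.io.spdk:cnode2 on the
# vfio-user socket below, as configured earlier in this log.
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
# 4096-byte reads for 5 seconds at queue depth 128, on the core selected by mask 0x2
build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
# the hello_world example takes the same transport ID
build/examples/hello_world -d 256 -g -r "$TRID"

The write and randrw phases above differ only in the -w workload, the queue depth and the core mask.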
00:14:24.481 [2024-07-15 09:23:11.555281] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:24.481 09:23:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:24.481 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.741 [2024-07-15 09:23:11.823654] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:26.127 Initializing NVMe Controllers 00:14:26.127 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:26.127 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:26.127 Initialization complete. Launching workers. 00:14:26.127 submit (in ns) avg, min, max = 6518.8, 3900.8, 3999728.3 00:14:26.127 complete (in ns) avg, min, max = 18655.5, 2407.5, 4039218.3 00:14:26.127 00:14:26.127 Submit histogram 00:14:26.127 ================ 00:14:26.127 Range in us Cumulative Count 00:14:26.127 3.893 - 3.920: 0.9589% ( 187) 00:14:26.127 3.920 - 3.947: 5.8971% ( 963) 00:14:26.127 3.947 - 3.973: 12.8506% ( 1356) 00:14:26.127 3.973 - 4.000: 23.5014% ( 2077) 00:14:26.127 4.000 - 4.027: 35.2802% ( 2297) 00:14:26.127 4.027 - 4.053: 47.9411% ( 2469) 00:14:26.127 4.053 - 4.080: 64.4531% ( 3220) 00:14:26.128 4.080 - 4.107: 80.0010% ( 3032) 00:14:26.128 4.107 - 4.133: 90.5954% ( 2066) 00:14:26.128 4.133 - 4.160: 95.8464% ( 1024) 00:14:26.128 4.160 - 4.187: 98.2668% ( 472) 00:14:26.128 4.187 - 4.213: 99.1231% ( 167) 00:14:26.128 4.213 - 4.240: 99.4359% ( 61) 00:14:26.128 4.240 - 4.267: 99.5436% ( 21) 00:14:26.128 4.267 - 4.293: 99.5744% ( 6) 00:14:26.128 4.320 - 4.347: 99.5795% ( 1) 00:14:26.128 4.427 - 4.453: 99.5846% ( 1) 00:14:26.128 4.453 - 4.480: 99.5898% ( 1) 00:14:26.128 4.533 - 4.560: 99.5949% ( 1) 00:14:26.128 4.613 - 4.640: 99.6000% ( 1) 00:14:26.128 4.827 - 4.853: 99.6051% ( 1) 00:14:26.128 4.880 - 4.907: 99.6103% ( 1) 00:14:26.128 4.987 - 5.013: 99.6154% ( 1) 00:14:26.128 5.200 - 5.227: 99.6205% ( 1) 00:14:26.128 5.333 - 5.360: 99.6257% ( 1) 00:14:26.128 5.387 - 5.413: 99.6308% ( 1) 00:14:26.128 5.467 - 5.493: 99.6359% ( 1) 00:14:26.128 5.600 - 5.627: 99.6410% ( 1) 00:14:26.128 5.627 - 5.653: 99.6513% ( 2) 00:14:26.128 5.733 - 5.760: 99.6564% ( 1) 00:14:26.128 5.920 - 5.947: 99.6616% ( 1) 00:14:26.128 5.973 - 6.000: 99.6718% ( 2) 00:14:26.128 6.000 - 6.027: 99.6821% ( 2) 00:14:26.128 6.053 - 6.080: 99.6872% ( 1) 00:14:26.128 6.080 - 6.107: 99.6975% ( 2) 00:14:26.128 6.107 - 6.133: 99.7026% ( 1) 00:14:26.128 6.133 - 6.160: 99.7077% ( 1) 00:14:26.128 6.160 - 6.187: 99.7180% ( 2) 00:14:26.128 6.187 - 6.213: 99.7282% ( 2) 00:14:26.128 6.213 - 6.240: 99.7333% ( 1) 00:14:26.128 6.240 - 6.267: 99.7436% ( 2) 00:14:26.128 6.267 - 6.293: 99.7539% ( 2) 00:14:26.128 6.320 - 6.347: 99.7641% ( 2) 00:14:26.128 6.400 - 6.427: 99.7692% ( 1) 00:14:26.128 6.507 - 6.533: 99.7744% ( 1) 00:14:26.128 6.560 - 6.587: 99.7795% ( 1) 00:14:26.128 6.587 - 6.613: 99.7949% ( 3) 00:14:26.128 6.613 - 6.640: 99.8000% ( 1) 00:14:26.128 6.667 - 6.693: 99.8051% ( 1) 00:14:26.128 6.747 - 6.773: 99.8103% ( 1) 00:14:26.128 6.773 - 6.800: 99.8154% ( 1) 00:14:26.128 6.800 - 6.827: 99.8205% ( 1) 00:14:26.128 6.827 - 6.880: 99.8308% ( 2) 00:14:26.128 7.040 - 7.093: 99.8410% ( 2) 00:14:26.128 7.147 - 7.200: 99.8462% ( 1) 00:14:26.128 7.200 - 7.253: 99.8615% ( 3) 00:14:26.128 7.307 - 7.360: 99.8821% ( 4) 
00:14:26.128 7.520 - 7.573: 99.8923% ( 2) 00:14:26.128 7.627 - 7.680: 99.9077% ( 3) 00:14:26.128 7.787 - 7.840: 99.9128% ( 1) 00:14:26.128 7.947 - 8.000: 99.9180% ( 1) 00:14:26.128 8.907 - 8.960: 99.9231% ( 1) 00:14:26.128 9.120 - 9.173: 99.9282% ( 1) 00:14:26.128 12.107 - 12.160: 99.9333% ( 1) 00:14:26.128 16.747 - 16.853: 99.9385% ( 1) 00:14:26.128 3986.773 - 4014.080: 100.0000% ( 12) 00:14:26.128 00:14:26.128 Complete histogram 00:14:26.128 ================== 00:14:26.128 Range in us Cumulative Count 00:14:26.128 2.400 - 2.413: 0.3128% ( 61) 00:14:26.128 2.413 - 2.427: 0.9282% ( 120) 00:14:26.128 2.427 - 2.440: 1.0307% ( 20) 00:14:26.128 2.440 - 2.453: 1.1487% ( 23) 00:14:26.128 2.453 - 2.467: 1.1897% ( 8) 00:14:26.128 2.467 - 2.480: 38.3314% ( 7243) 00:14:26.128 2.480 - 2.493: 57.8432% ( 3805) 00:14:26.128 2.493 - 2.507: 69.3349% ( 2241) 00:14:26.128 2.507 - 2.520: 78.3037% ( 1749) 00:14:26.128 2.520 - 2.533: 81.5138% ( 626) 00:14:26.128 2.533 - 2.547: 84.0624% ( 497) 00:14:26.128 2.547 - [2024-07-15 09:23:12.924416] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:26.128 2.560: 88.9185% ( 947) 00:14:26.128 2.560 - 2.573: 93.9644% ( 984) 00:14:26.128 2.573 - 2.587: 96.7848% ( 550) 00:14:26.129 2.587 - 2.600: 98.2616% ( 288) 00:14:26.129 2.600 - 2.613: 99.0411% ( 152) 00:14:26.129 2.613 - 2.627: 99.2770% ( 46) 00:14:26.129 2.627 - 2.640: 99.3385% ( 12) 00:14:26.129 2.640 - 2.653: 99.3539% ( 3) 00:14:26.129 2.653 - 2.667: 99.3590% ( 1) 00:14:26.129 2.667 - 2.680: 99.3641% ( 1) 00:14:26.129 4.320 - 4.347: 99.3693% ( 1) 00:14:26.129 4.347 - 4.373: 99.3744% ( 1) 00:14:26.129 4.373 - 4.400: 99.3795% ( 1) 00:14:26.129 4.453 - 4.480: 99.3846% ( 1) 00:14:26.129 4.507 - 4.533: 99.3949% ( 2) 00:14:26.129 4.533 - 4.560: 99.4103% ( 3) 00:14:26.129 4.560 - 4.587: 99.4154% ( 1) 00:14:26.129 4.667 - 4.693: 99.4205% ( 1) 00:14:26.129 4.693 - 4.720: 99.4257% ( 1) 00:14:26.129 4.720 - 4.747: 99.4308% ( 1) 00:14:26.129 4.747 - 4.773: 99.4359% ( 1) 00:14:26.129 4.800 - 4.827: 99.4411% ( 1) 00:14:26.129 4.880 - 4.907: 99.4462% ( 1) 00:14:26.129 4.960 - 4.987: 99.4513% ( 1) 00:14:26.129 5.040 - 5.067: 99.4564% ( 1) 00:14:26.129 5.093 - 5.120: 99.4616% ( 1) 00:14:26.129 5.120 - 5.147: 99.4667% ( 1) 00:14:26.129 5.147 - 5.173: 99.4821% ( 3) 00:14:26.129 5.200 - 5.227: 99.4872% ( 1) 00:14:26.129 5.227 - 5.253: 99.4923% ( 1) 00:14:26.129 5.253 - 5.280: 99.4975% ( 1) 00:14:26.129 5.387 - 5.413: 99.5026% ( 1) 00:14:26.129 5.520 - 5.547: 99.5128% ( 2) 00:14:26.129 5.733 - 5.760: 99.5180% ( 1) 00:14:26.129 5.893 - 5.920: 99.5231% ( 1) 00:14:26.129 5.973 - 6.000: 99.5334% ( 2) 00:14:26.129 6.027 - 6.053: 99.5385% ( 1) 00:14:26.129 6.240 - 6.267: 99.5436% ( 1) 00:14:26.129 6.320 - 6.347: 99.5487% ( 1) 00:14:26.129 6.427 - 6.453: 99.5539% ( 1) 00:14:26.129 6.613 - 6.640: 99.5590% ( 1) 00:14:26.129 6.720 - 6.747: 99.5641% ( 1) 00:14:26.129 7.040 - 7.093: 99.5693% ( 1) 00:14:26.129 7.253 - 7.307: 99.5744% ( 1) 00:14:26.129 7.573 - 7.627: 99.5795% ( 1) 00:14:26.129 8.000 - 8.053: 99.5846% ( 1) 00:14:26.129 8.907 - 8.960: 99.5898% ( 1) 00:14:26.129 11.787 - 11.840: 99.5949% ( 1) 00:14:26.129 3549.867 - 3577.173: 99.6000% ( 1) 00:14:26.129 3659.093 - 3686.400: 99.6051% ( 1) 00:14:26.129 3986.773 - 4014.080: 99.9897% ( 75) 00:14:26.129 4014.080 - 4041.387: 100.0000% ( 2) 00:14:26.129 00:14:26.129 09:23:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:26.129 09:23:12 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:26.129 09:23:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:26.129 09:23:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:26.129 09:23:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:26.129 [ 00:14:26.129 { 00:14:26.129 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:26.129 "subtype": "Discovery", 00:14:26.129 "listen_addresses": [], 00:14:26.129 "allow_any_host": true, 00:14:26.129 "hosts": [] 00:14:26.129 }, 00:14:26.129 { 00:14:26.129 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:26.129 "subtype": "NVMe", 00:14:26.129 "listen_addresses": [ 00:14:26.129 { 00:14:26.129 "trtype": "VFIOUSER", 00:14:26.129 "adrfam": "IPv4", 00:14:26.129 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:26.129 "trsvcid": "0" 00:14:26.129 } 00:14:26.129 ], 00:14:26.129 "allow_any_host": true, 00:14:26.129 "hosts": [], 00:14:26.129 "serial_number": "SPDK1", 00:14:26.129 "model_number": "SPDK bdev Controller", 00:14:26.129 "max_namespaces": 32, 00:14:26.129 "min_cntlid": 1, 00:14:26.129 "max_cntlid": 65519, 00:14:26.129 "namespaces": [ 00:14:26.130 { 00:14:26.130 "nsid": 1, 00:14:26.130 "bdev_name": "Malloc1", 00:14:26.130 "name": "Malloc1", 00:14:26.130 "nguid": "AAC0C051E20D4B2C9F570F006938CBDB", 00:14:26.130 "uuid": "aac0c051-e20d-4b2c-9f57-0f006938cbdb" 00:14:26.130 }, 00:14:26.130 { 00:14:26.130 "nsid": 2, 00:14:26.130 "bdev_name": "Malloc3", 00:14:26.130 "name": "Malloc3", 00:14:26.130 "nguid": "348B245517A6403F89C65D9B5B851703", 00:14:26.130 "uuid": "348b2455-17a6-403f-89c6-5d9b5b851703" 00:14:26.130 } 00:14:26.130 ] 00:14:26.130 }, 00:14:26.130 { 00:14:26.130 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:26.130 "subtype": "NVMe", 00:14:26.130 "listen_addresses": [ 00:14:26.130 { 00:14:26.130 "trtype": "VFIOUSER", 00:14:26.130 "adrfam": "IPv4", 00:14:26.130 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:26.130 "trsvcid": "0" 00:14:26.130 } 00:14:26.130 ], 00:14:26.130 "allow_any_host": true, 00:14:26.130 "hosts": [], 00:14:26.130 "serial_number": "SPDK2", 00:14:26.130 "model_number": "SPDK bdev Controller", 00:14:26.130 "max_namespaces": 32, 00:14:26.130 "min_cntlid": 1, 00:14:26.130 "max_cntlid": 65519, 00:14:26.130 "namespaces": [ 00:14:26.130 { 00:14:26.130 "nsid": 1, 00:14:26.130 "bdev_name": "Malloc2", 00:14:26.130 "name": "Malloc2", 00:14:26.130 "nguid": "B03E7896CE9640E5812331B995E6D87C", 00:14:26.130 "uuid": "b03e7896-ce96-40e5-8123-31b995e6d87c" 00:14:26.130 } 00:14:26.130 ] 00:14:26.130 } 00:14:26.130 ] 00:14:26.130 09:23:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:26.130 09:23:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=610860 00:14:26.130 09:23:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:26.130 09:23:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:26.130 09:23:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:26.130 09:23:13 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:26.130 09:23:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:26.130 09:23:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:26.130 09:23:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:26.130 09:23:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:26.130 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.130 Malloc4 00:14:26.130 [2024-07-15 09:23:13.322213] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:26.391 09:23:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:26.391 [2024-07-15 09:23:13.476238] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:26.391 09:23:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:26.391 Asynchronous Event Request test 00:14:26.391 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:26.391 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:26.391 Registering asynchronous event callbacks... 00:14:26.391 Starting namespace attribute notice tests for all controllers... 00:14:26.391 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:26.391 aer_cb - Changed Namespace 00:14:26.391 Cleaning up... 
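[Editor's note] The Malloc4 steps above are what actually fire the namespace-attribute AER that the aer tool is waiting for. Condensed into plain RPC calls, a sketch using relative paths inside an SPDK checkout rather than the Jenkins workspace paths shown in the trace:

# 1. create a 64 MiB malloc bdev with 512-byte blocks (command as traced above)
scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
# 2. attach it to the live subsystem as namespace 2; hosts with an outstanding
#    Asynchronous Event Request receive a Namespace Attribute Changed notice
scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
# 3. list the subsystems to confirm the new namespace (output shown below)
scripts/rpc.py nvmf_get_subsystems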
00:14:26.651 [ 00:14:26.651 { 00:14:26.651 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:26.651 "subtype": "Discovery", 00:14:26.651 "listen_addresses": [], 00:14:26.651 "allow_any_host": true, 00:14:26.651 "hosts": [] 00:14:26.651 }, 00:14:26.651 { 00:14:26.651 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:26.651 "subtype": "NVMe", 00:14:26.651 "listen_addresses": [ 00:14:26.651 { 00:14:26.651 "trtype": "VFIOUSER", 00:14:26.651 "adrfam": "IPv4", 00:14:26.651 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:26.651 "trsvcid": "0" 00:14:26.651 } 00:14:26.651 ], 00:14:26.651 "allow_any_host": true, 00:14:26.651 "hosts": [], 00:14:26.651 "serial_number": "SPDK1", 00:14:26.651 "model_number": "SPDK bdev Controller", 00:14:26.651 "max_namespaces": 32, 00:14:26.651 "min_cntlid": 1, 00:14:26.651 "max_cntlid": 65519, 00:14:26.651 "namespaces": [ 00:14:26.651 { 00:14:26.651 "nsid": 1, 00:14:26.652 "bdev_name": "Malloc1", 00:14:26.652 "name": "Malloc1", 00:14:26.652 "nguid": "AAC0C051E20D4B2C9F570F006938CBDB", 00:14:26.652 "uuid": "aac0c051-e20d-4b2c-9f57-0f006938cbdb" 00:14:26.652 }, 00:14:26.652 { 00:14:26.652 "nsid": 2, 00:14:26.652 "bdev_name": "Malloc3", 00:14:26.652 "name": "Malloc3", 00:14:26.652 "nguid": "348B245517A6403F89C65D9B5B851703", 00:14:26.652 "uuid": "348b2455-17a6-403f-89c6-5d9b5b851703" 00:14:26.652 } 00:14:26.652 ] 00:14:26.652 }, 00:14:26.652 { 00:14:26.652 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:26.652 "subtype": "NVMe", 00:14:26.652 "listen_addresses": [ 00:14:26.652 { 00:14:26.652 "trtype": "VFIOUSER", 00:14:26.652 "adrfam": "IPv4", 00:14:26.652 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:26.652 "trsvcid": "0" 00:14:26.652 } 00:14:26.652 ], 00:14:26.652 "allow_any_host": true, 00:14:26.652 "hosts": [], 00:14:26.652 "serial_number": "SPDK2", 00:14:26.652 "model_number": "SPDK bdev Controller", 00:14:26.652 "max_namespaces": 32, 00:14:26.652 "min_cntlid": 1, 00:14:26.652 "max_cntlid": 65519, 00:14:26.652 "namespaces": [ 00:14:26.652 { 00:14:26.652 "nsid": 1, 00:14:26.652 "bdev_name": "Malloc2", 00:14:26.652 "name": "Malloc2", 00:14:26.652 "nguid": "B03E7896CE9640E5812331B995E6D87C", 00:14:26.652 "uuid": "b03e7896-ce96-40e5-8123-31b995e6d87c" 00:14:26.652 }, 00:14:26.652 { 00:14:26.652 "nsid": 2, 00:14:26.652 "bdev_name": "Malloc4", 00:14:26.652 "name": "Malloc4", 00:14:26.652 "nguid": "24822B5D34C24B0B852F1DA6339DDB44", 00:14:26.652 "uuid": "24822b5d-34c2-4b0b-852f-1da6339ddb44" 00:14:26.652 } 00:14:26.652 ] 00:14:26.652 } 00:14:26.652 ] 00:14:26.652 09:23:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 610860 00:14:26.652 09:23:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:26.652 09:23:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 601862 00:14:26.652 09:23:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 601862 ']' 00:14:26.652 09:23:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 601862 00:14:26.652 09:23:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:14:26.652 09:23:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:26.652 09:23:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 601862 00:14:26.652 09:23:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:26.652 09:23:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
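[Editor's note] The JSON printed above is a plain array of subsystem objects, so the namespace view can be pulled out with any JSON tool. A possible one-liner, assuming jq is available on the box (jq is not part of the test run itself):

scripts/rpc.py nvmf_get_subsystems | \
  jq -r '.[] | select(.nqn == "nqn.2019-07.io.spdk:cnode2") | .namespaces[] | "\(.nsid)  \(.bdev_name)  \(.uuid)"'
# prints, for the state above:
# 1  Malloc2  b03e7896-ce96-40e5-8123-31b995e6d87c
# 2  Malloc4  24822b5d-34c2-4b0b-852f-1da6339ddb44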
00:14:26.652 09:23:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 601862' 00:14:26.652 killing process with pid 601862 00:14:26.652 09:23:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 601862 00:14:26.652 09:23:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 601862 00:14:26.913 09:23:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:26.913 09:23:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:26.913 09:23:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:26.913 09:23:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:26.913 09:23:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:26.913 09:23:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=610995 00:14:26.913 09:23:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 610995' 00:14:26.913 Process pid: 610995 00:14:26.913 09:23:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:26.913 09:23:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:26.913 09:23:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 610995 00:14:26.913 09:23:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 610995 ']' 00:14:26.913 09:23:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.913 09:23:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:26.913 09:23:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.913 09:23:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:26.913 09:23:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:26.913 [2024-07-15 09:23:13.963049] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:26.913 [2024-07-15 09:23:13.963974] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:14:26.913 [2024-07-15 09:23:13.964014] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.913 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.913 [2024-07-15 09:23:14.031038] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:26.913 [2024-07-15 09:23:14.095742] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.913 [2024-07-15 09:23:14.095788] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:26.913 [2024-07-15 09:23:14.095796] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.913 [2024-07-15 09:23:14.095803] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.913 [2024-07-15 09:23:14.095808] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:26.913 [2024-07-15 09:23:14.095951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.913 [2024-07-15 09:23:14.096171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.913 [2024-07-15 09:23:14.096331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:26.913 [2024-07-15 09:23:14.096331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.173 [2024-07-15 09:23:14.160958] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:27.173 [2024-07-15 09:23:14.161022] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:27.173 [2024-07-15 09:23:14.161999] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:27.173 [2024-07-15 09:23:14.162374] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:27.173 [2024-07-15 09:23:14.162455] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:14:27.744 09:23:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:27.744 09:23:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:14:27.744 09:23:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:28.684 09:23:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:28.944 09:23:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:28.944 09:23:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:28.944 09:23:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:28.944 09:23:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:28.944 09:23:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:28.944 Malloc1 00:14:28.944 09:23:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:29.206 09:23:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:29.466 09:23:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:29.466 09:23:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:14:29.466 09:23:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:29.466 09:23:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:29.726 Malloc2 00:14:29.726 09:23:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:29.984 09:23:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:29.984 09:23:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:30.244 09:23:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:30.244 09:23:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 610995 00:14:30.244 09:23:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 610995 ']' 00:14:30.244 09:23:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 610995 00:14:30.244 09:23:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:14:30.244 09:23:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:30.244 09:23:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 610995 00:14:30.244 09:23:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:30.244 09:23:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:30.244 09:23:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 610995' 00:14:30.244 killing process with pid 610995 00:14:30.244 09:23:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 610995 00:14:30.244 09:23:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 610995 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:30.504 00:14:30.504 real 0m50.661s 00:14:30.504 user 3m20.611s 00:14:30.504 sys 0m3.195s 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:30.504 ************************************ 00:14:30.504 END TEST nvmf_vfio_user 00:14:30.504 ************************************ 00:14:30.504 09:23:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:30.504 09:23:17 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:30.504 09:23:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:30.504 09:23:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:30.504 09:23:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:30.504 ************************************ 00:14:30.504 START TEST 
nvmf_vfio_user_nvme_compliance 00:14:30.504 ************************************ 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:30.504 * Looking for test storage... 00:14:30.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.504 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=611745 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 611745' 00:14:30.505 Process pid: 611745 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 611745 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 611745 ']' 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:30.505 09:23:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:30.764 [2024-07-15 09:23:17.745044] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:14:30.764 [2024-07-15 09:23:17.745094] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.764 EAL: No free 2048 kB hugepages reported on node 1 00:14:30.764 [2024-07-15 09:23:17.814921] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:30.764 [2024-07-15 09:23:17.879366] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.764 [2024-07-15 09:23:17.879405] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.764 [2024-07-15 09:23:17.879413] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.764 [2024-07-15 09:23:17.879420] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.764 [2024-07-15 09:23:17.879425] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
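For readers following the trace: the per-controller vfio-user setup exercised just above (and repeated by compliance.sh below) reduces to the rpc.py sequence sketched here. This is a condensed restatement of commands visible in this log, with the long workspace prefix shortened; the socket path, bdev name and NQN are simply what this run happens to use, not required values.

    mkdir -p /var/run/vfio-user/domain/vfio-user2/2
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
    scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0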
00:14:30.764 [2024-07-15 09:23:17.879566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.764 [2024-07-15 09:23:17.879680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.764 [2024-07-15 09:23:17.879684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.703 09:23:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:31.703 09:23:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:14:31.703 09:23:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:32.643 09:23:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:32.643 09:23:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:32.643 09:23:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:32.643 09:23:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.643 09:23:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:32.643 09:23:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.643 09:23:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:32.643 09:23:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:32.643 09:23:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.643 09:23:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:32.643 malloc0 00:14:32.643 09:23:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.643 09:23:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:32.643 09:23:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.643 09:23:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:32.643 09:23:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.643 09:23:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:32.643 09:23:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.643 09:23:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:32.643 09:23:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.643 09:23:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:32.643 09:23:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.643 09:23:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:32.643 09:23:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.643 
09:23:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:32.643 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.643 00:14:32.643 00:14:32.643 CUnit - A unit testing framework for C - Version 2.1-3 00:14:32.643 http://cunit.sourceforge.net/ 00:14:32.643 00:14:32.643 00:14:32.643 Suite: nvme_compliance 00:14:32.643 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 09:23:19.818199] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:32.643 [2024-07-15 09:23:19.819539] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:32.643 [2024-07-15 09:23:19.819550] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:32.643 [2024-07-15 09:23:19.819554] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:32.643 [2024-07-15 09:23:19.821218] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:32.981 passed 00:14:32.981 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 09:23:19.914793] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:32.981 [2024-07-15 09:23:19.917809] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:32.981 passed 00:14:32.981 Test: admin_identify_ns ...[2024-07-15 09:23:20.014293] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:32.981 [2024-07-15 09:23:20.073770] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:32.981 [2024-07-15 09:23:20.081766] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:32.981 [2024-07-15 09:23:20.102902] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.245 passed 00:14:33.245 Test: admin_get_features_mandatory_features ...[2024-07-15 09:23:20.197929] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.245 [2024-07-15 09:23:20.200954] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.245 passed 00:14:33.245 Test: admin_get_features_optional_features ...[2024-07-15 09:23:20.293499] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.245 [2024-07-15 09:23:20.299528] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.245 passed 00:14:33.245 Test: admin_set_features_number_of_queues ...[2024-07-15 09:23:20.390659] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.506 [2024-07-15 09:23:20.496867] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.506 passed 00:14:33.506 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 09:23:20.588480] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.506 [2024-07-15 09:23:20.591498] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.506 passed 00:14:33.506 Test: admin_get_log_page_with_lpo ...[2024-07-15 09:23:20.685592] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.765 [2024-07-15 09:23:20.752763] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:33.765 [2024-07-15 09:23:20.765809] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.765 passed 00:14:33.765 Test: fabric_property_get ...[2024-07-15 09:23:20.857406] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.765 [2024-07-15 09:23:20.858653] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:33.765 [2024-07-15 09:23:20.860420] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.765 passed 00:14:33.765 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 09:23:20.951999] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.765 [2024-07-15 09:23:20.953242] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:33.765 [2024-07-15 09:23:20.957026] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:34.024 passed 00:14:34.024 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 09:23:21.050161] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:34.024 [2024-07-15 09:23:21.133761] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:34.024 [2024-07-15 09:23:21.149767] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:34.024 [2024-07-15 09:23:21.154850] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:34.024 passed 00:14:34.284 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 09:23:21.246412] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:34.284 [2024-07-15 09:23:21.247651] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:34.284 [2024-07-15 09:23:21.249434] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:34.284 passed 00:14:34.284 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 09:23:21.342037] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:34.284 [2024-07-15 09:23:21.418775] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:34.284 [2024-07-15 09:23:21.442758] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:34.284 [2024-07-15 09:23:21.447860] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:34.545 passed 00:14:34.545 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 09:23:21.541519] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:34.545 [2024-07-15 09:23:21.542773] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:34.545 [2024-07-15 09:23:21.542791] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:34.545 [2024-07-15 09:23:21.544542] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:34.545 passed 00:14:34.545 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 09:23:21.635202] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:34.545 [2024-07-15 09:23:21.726772] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:14:34.545 [2024-07-15 09:23:21.734764] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:34.545 [2024-07-15 09:23:21.742762] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:34.832 [2024-07-15 09:23:21.750759] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:34.832 [2024-07-15 09:23:21.779846] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:34.832 passed 00:14:34.832 Test: admin_create_io_sq_verify_pc ...[2024-07-15 09:23:21.873888] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:34.832 [2024-07-15 09:23:21.892768] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:34.833 [2024-07-15 09:23:21.910040] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:34.833 passed 00:14:34.833 Test: admin_create_io_qp_max_qps ...[2024-07-15 09:23:21.998535] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:36.219 [2024-07-15 09:23:23.115764] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:14:36.481 [2024-07-15 09:23:23.494293] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:36.481 passed 00:14:36.481 Test: admin_create_io_sq_shared_cq ...[2024-07-15 09:23:23.587440] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:36.741 [2024-07-15 09:23:23.719765] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:36.741 [2024-07-15 09:23:23.756819] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:36.741 passed 00:14:36.741 00:14:36.741 Run Summary: Type Total Ran Passed Failed Inactive 00:14:36.741 suites 1 1 n/a 0 0 00:14:36.741 tests 18 18 18 0 0 00:14:36.741 asserts 360 360 360 0 n/a 00:14:36.741 00:14:36.741 Elapsed time = 1.652 seconds 00:14:36.741 09:23:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 611745 00:14:36.741 09:23:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 611745 ']' 00:14:36.741 09:23:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 611745 00:14:36.741 09:23:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:14:36.741 09:23:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:36.741 09:23:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 611745 00:14:36.741 09:23:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:36.741 09:23:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:36.741 09:23:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 611745' 00:14:36.741 killing process with pid 611745 00:14:36.741 09:23:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 611745 00:14:36.741 09:23:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 611745 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:37.002 00:14:37.002 real 0m6.458s 00:14:37.002 user 0m18.502s 00:14:37.002 sys 0m0.449s 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:37.002 ************************************ 00:14:37.002 END TEST nvmf_vfio_user_nvme_compliance 00:14:37.002 ************************************ 00:14:37.002 09:23:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:37.002 09:23:24 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:37.002 09:23:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:37.002 09:23:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:37.002 09:23:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:37.002 ************************************ 00:14:37.002 START TEST nvmf_vfio_user_fuzz 00:14:37.002 ************************************ 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:37.002 * Looking for test storage... 00:14:37.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:37.002 09:23:24 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.002 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=613143 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 613143' 00:14:37.003 Process pid: 613143 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 613143 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 613143 ']' 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
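The fuzz pass that follows mirrors the compliance setup above: start nvmf_tgt, stand up a VFIOUSER subsystem backed by malloc0, then point the generic NVMe fuzzer at the vfio-user socket. A condensed sketch of that invocation, with arguments taken from the commands logged below (paths shortened; the 30-second runtime and seed 123456 are per-run choices, not fixed by the test):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a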
00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:37.003 09:23:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:37.947 09:23:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:37.947 09:23:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:14:37.947 09:23:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:38.891 09:23:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:38.891 09:23:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.891 09:23:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:38.891 09:23:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.891 09:23:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:38.891 09:23:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:38.891 09:23:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.891 09:23:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:38.891 malloc0 00:14:38.891 09:23:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.891 09:23:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:38.891 09:23:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.891 09:23:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:38.891 09:23:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.891 09:23:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:38.891 09:23:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.891 09:23:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:38.891 09:23:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.891 09:23:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:38.891 09:23:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.891 09:23:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:38.891 09:23:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.891 09:23:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:38.891 09:23:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:11.017 Fuzzing completed. 
Shutting down the fuzz application 00:15:11.017 00:15:11.017 Dumping successful admin opcodes: 00:15:11.017 8, 9, 10, 24, 00:15:11.017 Dumping successful io opcodes: 00:15:11.017 0, 00:15:11.017 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1110066, total successful commands: 4373, random_seed: 464640704 00:15:11.017 NS: 0x200003a1ef00 admin qp, Total commands completed: 139934, total successful commands: 1132, random_seed: 493380480 00:15:11.017 09:23:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:11.017 09:23:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.017 09:23:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:11.017 09:23:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.017 09:23:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 613143 00:15:11.017 09:23:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 613143 ']' 00:15:11.017 09:23:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 613143 00:15:11.017 09:23:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:15:11.017 09:23:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:11.017 09:23:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 613143 00:15:11.017 09:23:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:11.017 09:23:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:11.017 09:23:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 613143' 00:15:11.017 killing process with pid 613143 00:15:11.017 09:23:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 613143 00:15:11.017 09:23:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 613143 00:15:11.017 09:23:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:11.017 09:23:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:11.017 00:15:11.017 real 0m33.637s 00:15:11.017 user 0m37.522s 00:15:11.017 sys 0m25.905s 00:15:11.017 09:23:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:11.017 09:23:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:11.017 ************************************ 00:15:11.017 END TEST nvmf_vfio_user_fuzz 00:15:11.017 ************************************ 00:15:11.017 09:23:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:11.017 09:23:57 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:11.017 09:23:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:11.017 09:23:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:11.017 09:23:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:11.017 ************************************ 00:15:11.017 START 
TEST nvmf_host_management 00:15:11.017 ************************************ 00:15:11.017 09:23:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:11.017 * Looking for test storage... 00:15:11.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:11.017 09:23:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:11.017 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:15:11.017 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:11.017 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:11.017 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.017 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.017 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:11.017 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:11.017 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.017 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:11.017 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.017 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.017 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:11.017 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:11.017 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.017 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.017 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:11.017 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:11.017 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:11.017 09:23:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.017 09:23:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.017 09:23:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.017 09:23:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.018 09:23:57 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:11.018 09:23:57 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:15:11.018 09:23:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:19.160 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:19.160 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:19.160 Found net devices under 0000:31:00.0: cvl_0_0 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:19.160 Found net devices under 0000:31:00.1: cvl_0_1 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:19.160 09:24:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:19.160 09:24:06 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:19.160 09:24:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:19.160 09:24:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:19.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:19.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:15:19.160 00:15:19.160 --- 10.0.0.2 ping statistics --- 00:15:19.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.160 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:15:19.160 09:24:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:19.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:19.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:15:19.160 00:15:19.161 --- 10.0.0.1 ping statistics --- 00:15:19.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.161 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:15:19.161 09:24:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:19.161 09:24:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:15:19.161 09:24:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:19.161 09:24:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:19.161 09:24:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:19.161 09:24:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:19.161 09:24:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:19.161 09:24:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:19.161 09:24:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:19.161 09:24:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:15:19.161 09:24:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:15:19.161 09:24:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:19.161 09:24:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:19.161 09:24:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:19.161 09:24:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:19.161 09:24:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=623915 00:15:19.161 09:24:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 623915 00:15:19.161 09:24:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:19.161 09:24:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 623915 ']' 00:15:19.161 09:24:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.161 09:24:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:19.161 09:24:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:15:19.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.161 09:24:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:19.161 09:24:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:19.161 [2024-07-15 09:24:06.151387] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:15:19.161 [2024-07-15 09:24:06.151436] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.161 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.161 [2024-07-15 09:24:06.243198] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:19.161 [2024-07-15 09:24:06.310647] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.161 [2024-07-15 09:24:06.310686] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.161 [2024-07-15 09:24:06.310694] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.161 [2024-07-15 09:24:06.310700] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.161 [2024-07-15 09:24:06.310706] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:19.161 [2024-07-15 09:24:06.310740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:19.161 [2024-07-15 09:24:06.310895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:19.161 [2024-07-15 09:24:06.311100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:19.161 [2024-07-15 09:24:06.311100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.732 09:24:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:19.732 09:24:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:15:19.732 09:24:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:19.732 09:24:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:19.732 09:24:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:19.993 09:24:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.993 09:24:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:19.993 09:24:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.993 09:24:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:19.993 [2024-07-15 09:24:06.956198] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.993 09:24:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.993 09:24:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:15:19.993 09:24:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:19.993 09:24:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:19.994 09:24:06 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:19.994 09:24:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:15:19.994 09:24:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:15:19.994 09:24:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.994 09:24:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:19.994 Malloc0 00:15:19.994 [2024-07-15 09:24:07.019391] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.994 09:24:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.994 09:24:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:15:19.994 09:24:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:19.994 09:24:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:19.994 09:24:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=624285 00:15:19.994 09:24:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 624285 /var/tmp/bdevperf.sock 00:15:19.994 09:24:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 624285 ']' 00:15:19.994 09:24:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:19.994 09:24:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:19.994 09:24:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:19.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
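The target-side RPCs in this test are batched through rpcs.txt (written by the cat at host_management.sh@23 and replayed by rpc_cmd at @30), so only their results are echoed here: the "Malloc0" bdev and the listener on 10.0.0.2 port 4420. The transport itself was created earlier with nvmf_create_transport -t tcp -o -u 8192. A rough sketch of an equivalent manual sequence is shown below; the bdev size/block size, serial number, and namespace layout are assumptions for illustration, not values taken from this log.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Back the subsystem with a malloc bdev (sizes assumed: 64 MiB total, 512 B blocks).
$rpc bdev_malloc_create 64 512 -b Malloc0
# Create the subsystem and expose the bdev as a namespace (serial number assumed).
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
# Allow the host NQN used by bdevperf below; the test later removes and re-adds it.
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Listen on the target-side address that bdevperf will connect to.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420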
00:15:19.994 09:24:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:19.994 09:24:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:15:19.994 09:24:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:19.994 09:24:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:19.994 09:24:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:19.994 09:24:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:19.994 09:24:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:19.994 09:24:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:19.994 { 00:15:19.994 "params": { 00:15:19.994 "name": "Nvme$subsystem", 00:15:19.994 "trtype": "$TEST_TRANSPORT", 00:15:19.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:19.994 "adrfam": "ipv4", 00:15:19.994 "trsvcid": "$NVMF_PORT", 00:15:19.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:19.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:19.994 "hdgst": ${hdgst:-false}, 00:15:19.994 "ddgst": ${ddgst:-false} 00:15:19.994 }, 00:15:19.994 "method": "bdev_nvme_attach_controller" 00:15:19.994 } 00:15:19.994 EOF 00:15:19.994 )") 00:15:19.994 09:24:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:19.994 09:24:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:15:19.994 09:24:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:19.994 09:24:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:19.994 "params": { 00:15:19.994 "name": "Nvme0", 00:15:19.994 "trtype": "tcp", 00:15:19.994 "traddr": "10.0.0.2", 00:15:19.994 "adrfam": "ipv4", 00:15:19.994 "trsvcid": "4420", 00:15:19.994 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:19.994 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:19.994 "hdgst": false, 00:15:19.994 "ddgst": false 00:15:19.994 }, 00:15:19.994 "method": "bdev_nvme_attach_controller" 00:15:19.994 }' 00:15:19.994 [2024-07-15 09:24:07.119913] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:15:19.994 [2024-07-15 09:24:07.119964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid624285 ] 00:15:19.994 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.994 [2024-07-15 09:24:07.185378] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.255 [2024-07-15 09:24:07.250125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.255 Running I/O for 10 seconds... 
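bdevperf receives its controller configuration through --json /dev/fd/63, a process substitution of the fragment that gen_nvmf_target_json prints above; only the per-controller "params" object is echoed in this log, while the outer wrapper comes from nvmf/common.sh. The sketch below shows what the fully resolved file plausibly looks like, together with an equivalent standalone invocation; the wrapper layout is assumed from SPDK's standard JSON config format, while the params and command-line flags are the ones visible above.

# Assumed full config file; the inner params block mirrors the printf output above.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same knobs as the run above: queue depth 64, 64 KiB I/Os, verify workload, 10 seconds.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10

With -o 65536 the MiB/s column in the result tables that follow is simply IOPS/16, e.g. 1338.88 IOPS ≈ 83.68 MiB/s for the aborted run and 1723.59 IOPS ≈ 107.72 MiB/s for the clean 1-second run.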
00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=647 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 647 -ge 100 ']' 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.829 09:24:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:20.829 [2024-07-15 09:24:07.962370] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962447] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be 
set 00:15:20.829 [2024-07-15 09:24:07.962454] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962467] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962474] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962480] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962487] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962493] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962499] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962505] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962512] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962518] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962537] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962555] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962561] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962567] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962574] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962580] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962586] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962592] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962599] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962605] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962611] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962617] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962630] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962636] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962642] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962655] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.829 [2024-07-15 09:24:07.962661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962674] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962680] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962687] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962693] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962699] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962706] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962712] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962719] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962726] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962732] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962738] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962745] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962757] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962764] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962770] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962777] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962783] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962790] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962796] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962802] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962808] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962814] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962821] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962827] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962833] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.962839] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dbe20 is same with the state(5) to be set 00:15:20.830 [2024-07-15 09:24:07.963175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.830 [2024-07-15 09:24:07.963213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.830 [2024-07-15 09:24:07.963232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.830 [2024-07-15 09:24:07.963240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.830 [2024-07-15 09:24:07.963250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.830 [2024-07-15 09:24:07.963257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.830 [2024-07-15 09:24:07.963267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.830 [2024-07-15 09:24:07.963275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.830 [2024-07-15 09:24:07.963289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.830 [2024-07-15 09:24:07.963297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.830 [2024-07-15 09:24:07.963307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.830 [2024-07-15 09:24:07.963314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.830 [2024-07-15 09:24:07.963324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.830 [2024-07-15 09:24:07.963331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.830 [2024-07-15 09:24:07.963341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.830 [2024-07-15 09:24:07.963348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.830 [2024-07-15 09:24:07.963357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.830 [2024-07-15 09:24:07.963365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.830 [2024-07-15 09:24:07.963374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.830 [2024-07-15 09:24:07.963381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.830 [2024-07-15 09:24:07.963391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.830 [2024-07-15 09:24:07.963398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.830 [2024-07-15 09:24:07.963407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.830 [2024-07-15 09:24:07.963415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.830 [2024-07-15 09:24:07.963424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.830 [2024-07-15 09:24:07.963432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.830 [2024-07-15 09:24:07.963441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.830 [2024-07-15 09:24:07.963449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.830 [2024-07-15 09:24:07.963458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.830 [2024-07-15 09:24:07.963466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.830 [2024-07-15 09:24:07.963475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.830 [2024-07-15 09:24:07.963482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.830 [2024-07-15 09:24:07.963492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.830 [2024-07-15 09:24:07.963500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.830 [2024-07-15 09:24:07.963510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.830 [2024-07-15 09:24:07.963517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.830 [2024-07-15 09:24:07.963526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.830 [2024-07-15 09:24:07.963533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.830 [2024-07-15 09:24:07.963543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.830 [2024-07-15 09:24:07.963550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.830 [2024-07-15 09:24:07.963559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.830 [2024-07-15 09:24:07.963567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.963582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.963599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.963614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.963630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.963646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.963662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.963678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.963694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.963711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.963727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.963743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.963765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.963782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.963799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.963816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.963833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.963849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.963866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.963882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.963898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.963916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:20.831 [2024-07-15 09:24:07.963932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.963949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.963964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.963980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.963990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.963997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.964006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.964013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.964022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.964029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.964039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.964046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.964055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.964061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.964071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.964078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.964087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 
09:24:07.964094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.831 [2024-07-15 09:24:07.964103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.831 [2024-07-15 09:24:07.964110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.832 [2024-07-15 09:24:07.964123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.832 [2024-07-15 09:24:07.964130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.832 [2024-07-15 09:24:07.964139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.832 [2024-07-15 09:24:07.964146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.832 [2024-07-15 09:24:07.964155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.832 [2024-07-15 09:24:07.964163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.832 [2024-07-15 09:24:07.964172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.832 [2024-07-15 09:24:07.964179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.832 [2024-07-15 09:24:07.964188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.832 [2024-07-15 09:24:07.964195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.832 [2024-07-15 09:24:07.964204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.832 [2024-07-15 09:24:07.964211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.832 [2024-07-15 09:24:07.964220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.832 [2024-07-15 09:24:07.964227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.832 [2024-07-15 09:24:07.964237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.832 [2024-07-15 09:24:07.964244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.832 [2024-07-15 09:24:07.964253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.832 [2024-07-15 09:24:07.964260] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.832 [2024-07-15 09:24:07.964269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.832 [2024-07-15 09:24:07.964276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.832 [2024-07-15 09:24:07.964285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a7850 is same with the state(5) to be set 00:15:20.832 [2024-07-15 09:24:07.964327] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11a7850 was disconnected and freed. reset controller. 00:15:20.832 [2024-07-15 09:24:07.965545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:20.832 09:24:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.832 task offset: 90624 on job bdev=Nvme0n1 fails 00:15:20.832 00:15:20.832 Latency(us) 00:15:20.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.832 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:20.832 Job: Nvme0n1 ended in about 0.53 seconds with error 00:15:20.832 Verification LBA range: start 0x0 length 0x400 00:15:20.832 Nvme0n1 : 0.53 1338.88 83.68 121.03 0.00 42696.47 3631.79 36700.16 00:15:20.832 =================================================================================================================== 00:15:20.832 Total : 1338.88 83.68 121.03 0.00 42696.47 3631.79 36700.16 00:15:20.832 [2024-07-15 09:24:07.967635] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:20.832 [2024-07-15 09:24:07.967658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd96540 (9): Bad file descriptor 00:15:20.832 09:24:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:20.832 09:24:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.832 09:24:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:20.832 [2024-07-15 09:24:07.972102] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:15:20.832 [2024-07-15 09:24:07.972176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:20.832 [2024-07-15 09:24:07.972197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.832 [2024-07-15 09:24:07.972212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:15:20.832 [2024-07-15 09:24:07.972220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:15:20.832 [2024-07-15 09:24:07.972227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:15:20.832 [2024-07-15 09:24:07.972234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd96540 
00:15:20.832 [2024-07-15 09:24:07.972251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd96540 (9): Bad file descriptor 00:15:20.832 [2024-07-15 09:24:07.972263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:20.832 [2024-07-15 09:24:07.972269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:20.832 [2024-07-15 09:24:07.972277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:20.832 [2024-07-15 09:24:07.972289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:20.832 09:24:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.832 09:24:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:15:22.216 09:24:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 624285 00:15:22.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (624285) - No such process 00:15:22.216 09:24:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:15:22.216 09:24:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:15:22.216 09:24:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:22.216 09:24:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:15:22.216 09:24:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:22.216 09:24:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:22.216 09:24:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:22.216 09:24:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:22.216 { 00:15:22.216 "params": { 00:15:22.216 "name": "Nvme$subsystem", 00:15:22.216 "trtype": "$TEST_TRANSPORT", 00:15:22.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:22.216 "adrfam": "ipv4", 00:15:22.216 "trsvcid": "$NVMF_PORT", 00:15:22.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:22.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:22.216 "hdgst": ${hdgst:-false}, 00:15:22.216 "ddgst": ${ddgst:-false} 00:15:22.216 }, 00:15:22.216 "method": "bdev_nvme_attach_controller" 00:15:22.216 } 00:15:22.216 EOF 00:15:22.216 )") 00:15:22.216 09:24:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:22.216 09:24:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:15:22.216 09:24:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:22.216 09:24:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:22.216 "params": { 00:15:22.216 "name": "Nvme0", 00:15:22.216 "trtype": "tcp", 00:15:22.216 "traddr": "10.0.0.2", 00:15:22.216 "adrfam": "ipv4", 00:15:22.216 "trsvcid": "4420", 00:15:22.216 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:22.216 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:22.216 "hdgst": false, 00:15:22.216 "ddgst": false 00:15:22.216 }, 00:15:22.216 "method": "bdev_nvme_attach_controller" 00:15:22.216 }' 00:15:22.216 [2024-07-15 09:24:09.036860] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:15:22.217 [2024-07-15 09:24:09.036914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid624640 ] 00:15:22.217 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.217 [2024-07-15 09:24:09.101248] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.217 [2024-07-15 09:24:09.165114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.477 Running I/O for 1 seconds... 00:15:23.417 00:15:23.417 Latency(us) 00:15:23.417 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.417 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:23.417 Verification LBA range: start 0x0 length 0x400 00:15:23.417 Nvme0n1 : 1.00 1723.59 107.72 0.00 0.00 36465.58 4642.13 32549.55 00:15:23.417 =================================================================================================================== 00:15:23.417 Total : 1723.59 107.72 0.00 0.00 36465.58 4642.13 32549.55 00:15:23.417 09:24:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:15:23.417 09:24:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:15:23.417 09:24:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:23.417 09:24:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:23.417 09:24:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:15:23.417 09:24:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:23.417 09:24:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:15:23.417 09:24:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:23.417 09:24:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:15:23.417 09:24:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:23.417 09:24:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:23.417 rmmod nvme_tcp 00:15:23.417 rmmod nvme_fabrics 00:15:23.417 rmmod nvme_keyring 00:15:23.677 09:24:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:23.677 09:24:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:15:23.677 09:24:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:15:23.677 09:24:10 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@489 -- # '[' -n 623915 ']' 00:15:23.677 09:24:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 623915 00:15:23.677 09:24:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 623915 ']' 00:15:23.677 09:24:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 623915 00:15:23.677 09:24:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:15:23.677 09:24:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:23.677 09:24:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 623915 00:15:23.677 09:24:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:23.677 09:24:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:23.677 09:24:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 623915' 00:15:23.677 killing process with pid 623915 00:15:23.677 09:24:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 623915 00:15:23.677 09:24:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 623915 00:15:23.677 [2024-07-15 09:24:10.797254] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:15:23.677 09:24:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:23.677 09:24:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:23.677 09:24:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:23.677 09:24:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:23.677 09:24:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:23.677 09:24:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.677 09:24:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:23.677 09:24:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.221 09:24:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:26.221 09:24:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:15:26.221 00:15:26.221 real 0m15.101s 00:15:26.221 user 0m22.865s 00:15:26.221 sys 0m6.950s 00:15:26.221 09:24:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:26.221 09:24:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:26.221 ************************************ 00:15:26.221 END TEST nvmf_host_management 00:15:26.221 ************************************ 00:15:26.221 09:24:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:26.221 09:24:12 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:26.221 09:24:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:26.221 09:24:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:26.221 09:24:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:26.221 ************************************ 00:15:26.221 START TEST nvmf_lvol 00:15:26.221 
************************************ 00:15:26.221 09:24:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:26.221 * Looking for test storage... 00:15:26.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:26.221 09:24:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:26.221 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:15:26.221 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.221 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.221 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.221 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.221 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.221 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.221 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.221 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.221 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.221 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.221 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:26.221 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:26.221 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.221 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.221 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:26.221 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:26.221 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:26.221 09:24:13 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.221 09:24:13 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.221 09:24:13 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.221 09:24:13 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:15:26.222 09:24:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:34.396 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:34.397 09:24:20 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:34.397 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:34.397 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:34.397 Found net devices under 0000:31:00.0: cvl_0_0 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:34.397 Found net devices under 0000:31:00.1: cvl_0_1 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:34.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:34.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.701 ms 00:15:34.397 00:15:34.397 --- 10.0.0.2 ping statistics --- 00:15:34.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.397 rtt min/avg/max/mdev = 0.701/0.701/0.701/0.000 ms 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:34.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:34.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:15:34.397 00:15:34.397 --- 10.0.0.1 ping statistics --- 00:15:34.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.397 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=630104 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 630104 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 630104 ']' 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:34.397 09:24:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:34.397 [2024-07-15 09:24:20.930698] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:15:34.398 [2024-07-15 09:24:20.930770] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.398 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.398 [2024-07-15 09:24:21.007802] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:34.398 [2024-07-15 09:24:21.079596] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.398 [2024-07-15 09:24:21.079635] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:34.398 [2024-07-15 09:24:21.079643] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.398 [2024-07-15 09:24:21.079649] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.398 [2024-07-15 09:24:21.079655] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.398 [2024-07-15 09:24:21.079725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.398 [2024-07-15 09:24:21.079859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.398 [2024-07-15 09:24:21.080038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.677 09:24:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:34.677 09:24:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:15:34.677 09:24:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:34.677 09:24:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:34.677 09:24:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:34.677 09:24:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.677 09:24:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:34.938 [2024-07-15 09:24:21.940029] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.938 09:24:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:35.200 09:24:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:15:35.200 09:24:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:35.200 09:24:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:15:35.200 09:24:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:15:35.460 09:24:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:15:35.460 09:24:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4b36efb2-33b9-4fff-a4b3-a2a5ec1f20b6 00:15:35.460 09:24:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4b36efb2-33b9-4fff-a4b3-a2a5ec1f20b6 lvol 20 00:15:35.721 09:24:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a2a447ca-70d8-42b7-a1b5-d3023e211967 00:15:35.721 09:24:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:35.981 09:24:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a2a447ca-70d8-42b7-a1b5-d3023e211967 00:15:35.981 09:24:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:15:36.242 [2024-07-15 09:24:23.297426] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.242 09:24:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:36.503 09:24:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=630487 00:15:36.503 09:24:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:15:36.503 09:24:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:15:36.503 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.448 09:24:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a2a447ca-70d8-42b7-a1b5-d3023e211967 MY_SNAPSHOT 00:15:37.709 09:24:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6c466a76-4ca4-4926-9d84-ab609458ffdc 00:15:37.709 09:24:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a2a447ca-70d8-42b7-a1b5-d3023e211967 30 00:15:37.971 09:24:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6c466a76-4ca4-4926-9d84-ab609458ffdc MY_CLONE 00:15:37.971 09:24:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=55f9f735-219a-421d-99ca-e2cbd9e2a9c4 00:15:37.971 09:24:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 55f9f735-219a-421d-99ca-e2cbd9e2a9c4 00:15:38.543 09:24:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 630487 00:15:46.687 Initializing NVMe Controllers 00:15:46.687 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:46.687 Controller IO queue size 128, less than required. 00:15:46.687 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:46.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:15:46.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:15:46.687 Initialization complete. Launching workers. 
00:15:46.687 ======================================================== 00:15:46.687 Latency(us) 00:15:46.687 Device Information : IOPS MiB/s Average min max 00:15:46.687 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11989.00 46.83 10683.99 1538.63 41097.85 00:15:46.687 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 18065.10 70.57 7086.34 824.38 62384.75 00:15:46.687 ======================================================== 00:15:46.687 Total : 30054.10 117.40 8521.49 824.38 62384.75 00:15:46.687 00:15:46.687 09:24:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:46.948 09:24:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a2a447ca-70d8-42b7-a1b5-d3023e211967 00:15:46.948 09:24:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4b36efb2-33b9-4fff-a4b3-a2a5ec1f20b6 00:15:47.209 09:24:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:15:47.209 09:24:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:15:47.209 09:24:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:15:47.209 09:24:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:47.209 09:24:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:15:47.209 09:24:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:47.209 09:24:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:15:47.209 09:24:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:47.209 09:24:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:47.209 rmmod nvme_tcp 00:15:47.209 rmmod nvme_fabrics 00:15:47.209 rmmod nvme_keyring 00:15:47.209 09:24:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:47.209 09:24:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:15:47.209 09:24:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:15:47.209 09:24:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 630104 ']' 00:15:47.209 09:24:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 630104 00:15:47.209 09:24:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 630104 ']' 00:15:47.209 09:24:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 630104 00:15:47.209 09:24:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:15:47.209 09:24:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:47.209 09:24:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 630104 00:15:47.470 09:24:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:47.470 09:24:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:47.470 09:24:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 630104' 00:15:47.470 killing process with pid 630104 00:15:47.470 09:24:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 630104 00:15:47.470 09:24:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 630104 00:15:47.470 09:24:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:47.470 09:24:34 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:47.470 09:24:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:47.470 09:24:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:47.470 09:24:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:47.470 09:24:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.470 09:24:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.470 09:24:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:50.012 00:15:50.012 real 0m23.696s 00:15:50.012 user 1m3.817s 00:15:50.012 sys 0m8.081s 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:50.012 ************************************ 00:15:50.012 END TEST nvmf_lvol 00:15:50.012 ************************************ 00:15:50.012 09:24:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:50.012 09:24:36 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:50.012 09:24:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:50.012 09:24:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:50.012 09:24:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:50.012 ************************************ 00:15:50.012 START TEST nvmf_lvs_grow 00:15:50.012 ************************************ 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:50.012 * Looking for test storage... 
00:15:50.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:15:50.012 09:24:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:58.149 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:58.149 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:58.149 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:58.150 Found net devices under 0000:31:00.0: cvl_0_0 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:58.150 Found net devices under 0000:31:00.1: cvl_0_1 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:58.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:58.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:15:58.150 00:15:58.150 --- 10.0.0.2 ping statistics --- 00:15:58.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.150 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:58.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:58.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:15:58.150 00:15:58.150 --- 10.0.0.1 ping statistics --- 00:15:58.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.150 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=637380 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 637380 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 637380 ']' 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:58.150 09:24:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:58.150 [2024-07-15 09:24:44.792385] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:15:58.150 [2024-07-15 09:24:44.792446] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.150 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.150 [2024-07-15 09:24:44.870264] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.150 [2024-07-15 09:24:44.944173] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.150 [2024-07-15 09:24:44.944213] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:58.150 [2024-07-15 09:24:44.944220] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:58.150 [2024-07-15 09:24:44.944227] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:58.150 [2024-07-15 09:24:44.944232] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:58.150 [2024-07-15 09:24:44.944250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.410 09:24:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:58.410 09:24:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:15:58.410 09:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:58.410 09:24:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:58.410 09:24:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:58.410 09:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.410 09:24:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:58.670 [2024-07-15 09:24:45.735242] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.670 09:24:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:15:58.670 09:24:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:58.670 09:24:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:58.670 09:24:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:58.670 ************************************ 00:15:58.670 START TEST lvs_grow_clean 00:15:58.670 ************************************ 00:15:58.670 09:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:15:58.670 09:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:58.670 09:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:58.670 09:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:58.670 09:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:58.670 09:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:58.670 09:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:58.670 09:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:58.670 09:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:58.670 09:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:58.930 09:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:15:58.930 09:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:59.190 09:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=50d9c11a-4ebe-4d92-b4d5-751a1bfa16a0 00:15:59.190 09:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50d9c11a-4ebe-4d92-b4d5-751a1bfa16a0 00:15:59.190 09:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:59.190 09:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:59.190 09:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:59.190 09:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 50d9c11a-4ebe-4d92-b4d5-751a1bfa16a0 lvol 150 00:15:59.450 09:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2756265c-c529-417d-85f9-7f14d9a3ea8e 00:15:59.450 09:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:59.450 09:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:59.450 [2024-07-15 09:24:46.607836] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:59.450 [2024-07-15 09:24:46.608059] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:59.450 true 00:15:59.450 09:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50d9c11a-4ebe-4d92-b4d5-751a1bfa16a0 00:15:59.450 09:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:59.710 09:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:59.710 09:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:59.970 09:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2756265c-c529-417d-85f9-7f14d9a3ea8e 00:15:59.970 09:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:00.229 [2024-07-15 09:24:47.213663] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:00.229 09:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:00.229 09:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=637891 00:16:00.229 09:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:00.229 09:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:00.229 09:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 637891 /var/tmp/bdevperf.sock 00:16:00.229 09:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 637891 ']' 00:16:00.229 09:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:00.229 09:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:00.229 09:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:00.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:00.229 09:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:00.229 09:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:00.489 [2024-07-15 09:24:47.434747] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
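
Note on the waitforlisten step above: it simply polls the bdevperf RPC socket until the process answers, then the test proceeds. A simplified sketch of that readiness probe is below; the real helper (waitforlisten in autotest_common.sh) also bounds the number of retries and checks that the process is still alive, and the use of rpc_get_methods as the probe is an assumption for illustration:

  # Poll the UNIX-domain RPC socket until bdevperf responds.
  while ! scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
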
00:16:00.489 [2024-07-15 09:24:47.434804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid637891 ] 00:16:00.489 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.489 [2024-07-15 09:24:47.516930] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.489 [2024-07-15 09:24:47.580978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.060 09:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:01.060 09:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:16:01.060 09:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:01.633 Nvme0n1 00:16:01.633 09:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:01.633 [ 00:16:01.633 { 00:16:01.633 "name": "Nvme0n1", 00:16:01.633 "aliases": [ 00:16:01.633 "2756265c-c529-417d-85f9-7f14d9a3ea8e" 00:16:01.633 ], 00:16:01.633 "product_name": "NVMe disk", 00:16:01.633 "block_size": 4096, 00:16:01.633 "num_blocks": 38912, 00:16:01.633 "uuid": "2756265c-c529-417d-85f9-7f14d9a3ea8e", 00:16:01.633 "assigned_rate_limits": { 00:16:01.633 "rw_ios_per_sec": 0, 00:16:01.633 "rw_mbytes_per_sec": 0, 00:16:01.633 "r_mbytes_per_sec": 0, 00:16:01.633 "w_mbytes_per_sec": 0 00:16:01.633 }, 00:16:01.633 "claimed": false, 00:16:01.633 "zoned": false, 00:16:01.633 "supported_io_types": { 00:16:01.633 "read": true, 00:16:01.633 "write": true, 00:16:01.633 "unmap": true, 00:16:01.633 "flush": true, 00:16:01.633 "reset": true, 00:16:01.633 "nvme_admin": true, 00:16:01.633 "nvme_io": true, 00:16:01.633 "nvme_io_md": false, 00:16:01.633 "write_zeroes": true, 00:16:01.633 "zcopy": false, 00:16:01.633 "get_zone_info": false, 00:16:01.633 "zone_management": false, 00:16:01.633 "zone_append": false, 00:16:01.633 "compare": true, 00:16:01.633 "compare_and_write": true, 00:16:01.633 "abort": true, 00:16:01.633 "seek_hole": false, 00:16:01.633 "seek_data": false, 00:16:01.633 "copy": true, 00:16:01.633 "nvme_iov_md": false 00:16:01.633 }, 00:16:01.633 "memory_domains": [ 00:16:01.633 { 00:16:01.633 "dma_device_id": "system", 00:16:01.633 "dma_device_type": 1 00:16:01.633 } 00:16:01.633 ], 00:16:01.633 "driver_specific": { 00:16:01.633 "nvme": [ 00:16:01.633 { 00:16:01.633 "trid": { 00:16:01.633 "trtype": "TCP", 00:16:01.633 "adrfam": "IPv4", 00:16:01.633 "traddr": "10.0.0.2", 00:16:01.633 "trsvcid": "4420", 00:16:01.633 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:01.633 }, 00:16:01.633 "ctrlr_data": { 00:16:01.633 "cntlid": 1, 00:16:01.633 "vendor_id": "0x8086", 00:16:01.633 "model_number": "SPDK bdev Controller", 00:16:01.633 "serial_number": "SPDK0", 00:16:01.633 "firmware_revision": "24.09", 00:16:01.633 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:01.633 "oacs": { 00:16:01.633 "security": 0, 00:16:01.633 "format": 0, 00:16:01.633 "firmware": 0, 00:16:01.633 "ns_manage": 0 00:16:01.633 }, 00:16:01.633 "multi_ctrlr": true, 00:16:01.633 "ana_reporting": false 00:16:01.633 }, 
00:16:01.633 "vs": { 00:16:01.633 "nvme_version": "1.3" 00:16:01.633 }, 00:16:01.633 "ns_data": { 00:16:01.633 "id": 1, 00:16:01.633 "can_share": true 00:16:01.633 } 00:16:01.633 } 00:16:01.633 ], 00:16:01.633 "mp_policy": "active_passive" 00:16:01.633 } 00:16:01.633 } 00:16:01.633 ] 00:16:01.633 09:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=638225 00:16:01.633 09:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:01.633 09:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:01.634 Running I/O for 10 seconds... 00:16:03.018 Latency(us) 00:16:03.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:03.018 Nvme0n1 : 1.00 18066.00 70.57 0.00 0.00 0.00 0.00 0.00 00:16:03.018 =================================================================================================================== 00:16:03.018 Total : 18066.00 70.57 0.00 0.00 0.00 0.00 0.00 00:16:03.018 00:16:03.588 09:24:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 50d9c11a-4ebe-4d92-b4d5-751a1bfa16a0 00:16:03.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:03.849 Nvme0n1 : 2.00 18167.00 70.96 0.00 0.00 0.00 0.00 0.00 00:16:03.849 =================================================================================================================== 00:16:03.849 Total : 18167.00 70.96 0.00 0.00 0.00 0.00 0.00 00:16:03.849 00:16:03.849 true 00:16:03.849 09:24:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50d9c11a-4ebe-4d92-b4d5-751a1bfa16a0 00:16:03.849 09:24:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:04.109 09:24:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:04.109 09:24:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:04.109 09:24:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 638225 00:16:04.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:04.680 Nvme0n1 : 3.00 18183.67 71.03 0.00 0.00 0.00 0.00 0.00 00:16:04.680 =================================================================================================================== 00:16:04.680 Total : 18183.67 71.03 0.00 0.00 0.00 0.00 0.00 00:16:04.680 00:16:06.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:06.061 Nvme0n1 : 4.00 18242.75 71.26 0.00 0.00 0.00 0.00 0.00 00:16:06.061 =================================================================================================================== 00:16:06.061 Total : 18242.75 71.26 0.00 0.00 0.00 0.00 0.00 00:16:06.061 00:16:07.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:07.003 Nvme0n1 : 5.00 18265.60 71.35 0.00 0.00 0.00 0.00 0.00 00:16:07.003 =================================================================================================================== 00:16:07.003 
Total : 18265.60 71.35 0.00 0.00 0.00 0.00 0.00 00:16:07.003 00:16:07.944 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:07.944 Nvme0n1 : 6.00 18284.17 71.42 0.00 0.00 0.00 0.00 0.00 00:16:07.944 =================================================================================================================== 00:16:07.944 Total : 18284.17 71.42 0.00 0.00 0.00 0.00 0.00 00:16:07.944 00:16:08.951 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:08.951 Nvme0n1 : 7.00 18287.43 71.44 0.00 0.00 0.00 0.00 0.00 00:16:08.951 =================================================================================================================== 00:16:08.951 Total : 18287.43 71.44 0.00 0.00 0.00 0.00 0.00 00:16:08.951 00:16:09.913 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:09.913 Nvme0n1 : 8.00 18303.50 71.50 0.00 0.00 0.00 0.00 0.00 00:16:09.913 =================================================================================================================== 00:16:09.913 Total : 18303.50 71.50 0.00 0.00 0.00 0.00 0.00 00:16:09.913 00:16:10.854 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:10.854 Nvme0n1 : 9.00 18305.33 71.51 0.00 0.00 0.00 0.00 0.00 00:16:10.854 =================================================================================================================== 00:16:10.854 Total : 18305.33 71.51 0.00 0.00 0.00 0.00 0.00 00:16:10.854 00:16:11.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:11.795 Nvme0n1 : 10.00 18314.20 71.54 0.00 0.00 0.00 0.00 0.00 00:16:11.795 =================================================================================================================== 00:16:11.795 Total : 18314.20 71.54 0.00 0.00 0.00 0.00 0.00 00:16:11.795 00:16:11.795 00:16:11.795 Latency(us) 00:16:11.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:11.795 Nvme0n1 : 10.00 18321.74 71.57 0.00 0.00 6983.63 3003.73 12834.13 00:16:11.795 =================================================================================================================== 00:16:11.795 Total : 18321.74 71.57 0.00 0.00 6983.63 3003.73 12834.13 00:16:11.795 0 00:16:11.795 09:24:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 637891 00:16:11.795 09:24:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 637891 ']' 00:16:11.795 09:24:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 637891 00:16:11.795 09:24:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:16:11.795 09:24:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:11.795 09:24:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 637891 00:16:11.795 09:24:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:11.795 09:24:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:11.795 09:24:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 637891' 00:16:11.795 killing process with pid 637891 00:16:11.795 09:24:58 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 637891 00:16:11.795 Received shutdown signal, test time was about 10.000000 seconds 00:16:11.795 00:16:11.795 Latency(us) 00:16:11.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.795 =================================================================================================================== 00:16:11.796 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:11.796 09:24:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 637891 00:16:12.056 09:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:12.056 09:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:12.316 09:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50d9c11a-4ebe-4d92-b4d5-751a1bfa16a0 00:16:12.316 09:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:12.576 09:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:12.576 09:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:16:12.576 09:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:12.576 [2024-07-15 09:24:59.693276] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:12.576 09:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50d9c11a-4ebe-4d92-b4d5-751a1bfa16a0 00:16:12.576 09:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:16:12.576 09:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50d9c11a-4ebe-4d92-b4d5-751a1bfa16a0 00:16:12.576 09:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:12.576 09:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:12.576 09:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:12.576 09:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:12.576 09:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:12.576 09:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:12.576 09:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:12.576 09:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:12.576 09:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50d9c11a-4ebe-4d92-b4d5-751a1bfa16a0 00:16:12.835 request: 00:16:12.835 { 00:16:12.835 "uuid": "50d9c11a-4ebe-4d92-b4d5-751a1bfa16a0", 00:16:12.835 "method": "bdev_lvol_get_lvstores", 00:16:12.835 "req_id": 1 00:16:12.835 } 00:16:12.835 Got JSON-RPC error response 00:16:12.835 response: 00:16:12.835 { 00:16:12.835 "code": -19, 00:16:12.835 "message": "No such device" 00:16:12.835 } 00:16:12.835 09:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:16:12.835 09:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:12.835 09:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:12.835 09:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:12.835 09:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:12.835 aio_bdev 00:16:13.095 09:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2756265c-c529-417d-85f9-7f14d9a3ea8e 00:16:13.095 09:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=2756265c-c529-417d-85f9-7f14d9a3ea8e 00:16:13.095 09:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:13.095 09:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:16:13.095 09:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:13.095 09:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:13.095 09:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:13.095 09:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2756265c-c529-417d-85f9-7f14d9a3ea8e -t 2000 00:16:13.355 [ 00:16:13.355 { 00:16:13.355 "name": "2756265c-c529-417d-85f9-7f14d9a3ea8e", 00:16:13.355 "aliases": [ 00:16:13.355 "lvs/lvol" 00:16:13.355 ], 00:16:13.355 "product_name": "Logical Volume", 00:16:13.355 "block_size": 4096, 00:16:13.355 "num_blocks": 38912, 00:16:13.355 "uuid": "2756265c-c529-417d-85f9-7f14d9a3ea8e", 00:16:13.355 "assigned_rate_limits": { 00:16:13.355 "rw_ios_per_sec": 0, 00:16:13.355 "rw_mbytes_per_sec": 0, 00:16:13.355 "r_mbytes_per_sec": 0, 00:16:13.355 "w_mbytes_per_sec": 0 00:16:13.355 }, 00:16:13.355 "claimed": false, 00:16:13.355 "zoned": false, 00:16:13.355 "supported_io_types": { 00:16:13.355 "read": true, 00:16:13.355 "write": true, 00:16:13.355 "unmap": true, 00:16:13.355 "flush": false, 00:16:13.355 "reset": true, 00:16:13.355 "nvme_admin": false, 00:16:13.355 "nvme_io": false, 00:16:13.355 
"nvme_io_md": false, 00:16:13.355 "write_zeroes": true, 00:16:13.355 "zcopy": false, 00:16:13.355 "get_zone_info": false, 00:16:13.355 "zone_management": false, 00:16:13.355 "zone_append": false, 00:16:13.355 "compare": false, 00:16:13.355 "compare_and_write": false, 00:16:13.355 "abort": false, 00:16:13.355 "seek_hole": true, 00:16:13.355 "seek_data": true, 00:16:13.355 "copy": false, 00:16:13.355 "nvme_iov_md": false 00:16:13.355 }, 00:16:13.355 "driver_specific": { 00:16:13.355 "lvol": { 00:16:13.355 "lvol_store_uuid": "50d9c11a-4ebe-4d92-b4d5-751a1bfa16a0", 00:16:13.355 "base_bdev": "aio_bdev", 00:16:13.355 "thin_provision": false, 00:16:13.355 "num_allocated_clusters": 38, 00:16:13.355 "snapshot": false, 00:16:13.355 "clone": false, 00:16:13.355 "esnap_clone": false 00:16:13.355 } 00:16:13.355 } 00:16:13.355 } 00:16:13.355 ] 00:16:13.355 09:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:16:13.355 09:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50d9c11a-4ebe-4d92-b4d5-751a1bfa16a0 00:16:13.355 09:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:13.355 09:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:13.355 09:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50d9c11a-4ebe-4d92-b4d5-751a1bfa16a0 00:16:13.355 09:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:13.614 09:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:13.614 09:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2756265c-c529-417d-85f9-7f14d9a3ea8e 00:16:13.873 09:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 50d9c11a-4ebe-4d92-b4d5-751a1bfa16a0 00:16:13.873 09:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:14.150 09:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:14.150 00:16:14.150 real 0m15.373s 00:16:14.150 user 0m15.112s 00:16:14.150 sys 0m1.287s 00:16:14.150 09:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:14.150 09:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:14.150 ************************************ 00:16:14.150 END TEST lvs_grow_clean 00:16:14.150 ************************************ 00:16:14.150 09:25:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:16:14.150 09:25:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:14.150 09:25:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:14.150 09:25:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:16:14.150 09:25:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:14.150 ************************************ 00:16:14.150 START TEST lvs_grow_dirty 00:16:14.150 ************************************ 00:16:14.150 09:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:16:14.150 09:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:14.150 09:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:14.150 09:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:14.150 09:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:14.150 09:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:14.150 09:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:14.150 09:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:14.150 09:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:14.150 09:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:14.411 09:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:14.411 09:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:14.411 09:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d5f0e2bd-a97b-42fc-b8b9-d1e795ed4ef7 00:16:14.411 09:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:14.411 09:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5f0e2bd-a97b-42fc-b8b9-d1e795ed4ef7 00:16:14.671 09:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:14.671 09:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:14.671 09:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d5f0e2bd-a97b-42fc-b8b9-d1e795ed4ef7 lvol 150 00:16:14.932 09:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=084d29ea-8d54-4114-a6f0-5119bd8fdc74 00:16:14.932 09:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:14.932 09:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:14.932 
[2024-07-15 09:25:02.024787] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:14.932 [2024-07-15 09:25:02.024842] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:14.932 true 00:16:14.932 09:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5f0e2bd-a97b-42fc-b8b9-d1e795ed4ef7 00:16:14.932 09:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:15.192 09:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:15.192 09:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:15.192 09:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 084d29ea-8d54-4114-a6f0-5119bd8fdc74 00:16:15.451 09:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:15.451 [2024-07-15 09:25:02.622589] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:15.451 09:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:15.710 09:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=640971 00:16:15.710 09:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:15.711 09:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:15.711 09:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 640971 /var/tmp/bdevperf.sock 00:16:15.711 09:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 640971 ']' 00:16:15.711 09:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:15.711 09:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:15.711 09:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:15.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
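
For reference, the initiator side exercised here reduces to a short RPC sequence; this is a minimal sketch assembled from the commands visible in this run (paths abbreviated, listener address 10.0.0.2:4420 taken from the log):

  # Start bdevperf with its own RPC socket; -z makes it wait until perform_tests is issued.
  build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

  # Attach an NVMe-oF/TCP controller to the subsystem created above, then drive the workload.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
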
00:16:15.711 09:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:15.711 09:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:15.711 [2024-07-15 09:25:02.836714] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:16:15.711 [2024-07-15 09:25:02.836769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid640971 ] 00:16:15.711 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.970 [2024-07-15 09:25:02.915424] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.970 [2024-07-15 09:25:02.969693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.540 09:25:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:16.540 09:25:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:16:16.540 09:25:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:16.800 Nvme0n1 00:16:16.800 09:25:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:17.061 [ 00:16:17.061 { 00:16:17.061 "name": "Nvme0n1", 00:16:17.061 "aliases": [ 00:16:17.061 "084d29ea-8d54-4114-a6f0-5119bd8fdc74" 00:16:17.061 ], 00:16:17.061 "product_name": "NVMe disk", 00:16:17.061 "block_size": 4096, 00:16:17.061 "num_blocks": 38912, 00:16:17.061 "uuid": "084d29ea-8d54-4114-a6f0-5119bd8fdc74", 00:16:17.061 "assigned_rate_limits": { 00:16:17.061 "rw_ios_per_sec": 0, 00:16:17.061 "rw_mbytes_per_sec": 0, 00:16:17.061 "r_mbytes_per_sec": 0, 00:16:17.061 "w_mbytes_per_sec": 0 00:16:17.061 }, 00:16:17.061 "claimed": false, 00:16:17.061 "zoned": false, 00:16:17.061 "supported_io_types": { 00:16:17.061 "read": true, 00:16:17.061 "write": true, 00:16:17.061 "unmap": true, 00:16:17.061 "flush": true, 00:16:17.061 "reset": true, 00:16:17.061 "nvme_admin": true, 00:16:17.061 "nvme_io": true, 00:16:17.061 "nvme_io_md": false, 00:16:17.061 "write_zeroes": true, 00:16:17.061 "zcopy": false, 00:16:17.061 "get_zone_info": false, 00:16:17.061 "zone_management": false, 00:16:17.061 "zone_append": false, 00:16:17.061 "compare": true, 00:16:17.061 "compare_and_write": true, 00:16:17.061 "abort": true, 00:16:17.061 "seek_hole": false, 00:16:17.061 "seek_data": false, 00:16:17.061 "copy": true, 00:16:17.061 "nvme_iov_md": false 00:16:17.061 }, 00:16:17.061 "memory_domains": [ 00:16:17.061 { 00:16:17.061 "dma_device_id": "system", 00:16:17.061 "dma_device_type": 1 00:16:17.061 } 00:16:17.061 ], 00:16:17.061 "driver_specific": { 00:16:17.061 "nvme": [ 00:16:17.061 { 00:16:17.061 "trid": { 00:16:17.061 "trtype": "TCP", 00:16:17.061 "adrfam": "IPv4", 00:16:17.061 "traddr": "10.0.0.2", 00:16:17.061 "trsvcid": "4420", 00:16:17.061 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:17.061 }, 00:16:17.061 "ctrlr_data": { 00:16:17.061 "cntlid": 1, 00:16:17.061 "vendor_id": "0x8086", 00:16:17.061 "model_number": "SPDK bdev Controller", 00:16:17.061 "serial_number": "SPDK0", 
00:16:17.061 "firmware_revision": "24.09", 00:16:17.061 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:17.061 "oacs": { 00:16:17.061 "security": 0, 00:16:17.061 "format": 0, 00:16:17.061 "firmware": 0, 00:16:17.061 "ns_manage": 0 00:16:17.061 }, 00:16:17.061 "multi_ctrlr": true, 00:16:17.061 "ana_reporting": false 00:16:17.061 }, 00:16:17.061 "vs": { 00:16:17.061 "nvme_version": "1.3" 00:16:17.061 }, 00:16:17.061 "ns_data": { 00:16:17.061 "id": 1, 00:16:17.061 "can_share": true 00:16:17.061 } 00:16:17.061 } 00:16:17.061 ], 00:16:17.061 "mp_policy": "active_passive" 00:16:17.061 } 00:16:17.061 } 00:16:17.061 ] 00:16:17.061 09:25:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=641305 00:16:17.061 09:25:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:17.061 09:25:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:17.061 Running I/O for 10 seconds... 00:16:18.446 Latency(us) 00:16:18.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:18.446 Nvme0n1 : 1.00 17679.00 69.06 0.00 0.00 0.00 0.00 0.00 00:16:18.446 =================================================================================================================== 00:16:18.446 Total : 17679.00 69.06 0.00 0.00 0.00 0.00 0.00 00:16:18.446 00:16:19.016 09:25:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d5f0e2bd-a97b-42fc-b8b9-d1e795ed4ef7 00:16:19.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:19.277 Nvme0n1 : 2.00 17751.50 69.34 0.00 0.00 0.00 0.00 0.00 00:16:19.277 =================================================================================================================== 00:16:19.277 Total : 17751.50 69.34 0.00 0.00 0.00 0.00 0.00 00:16:19.277 00:16:19.277 true 00:16:19.277 09:25:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5f0e2bd-a97b-42fc-b8b9-d1e795ed4ef7 00:16:19.277 09:25:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:19.277 09:25:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:19.277 09:25:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:19.277 09:25:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 641305 00:16:20.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:20.217 Nvme0n1 : 3.00 17791.67 69.50 0.00 0.00 0.00 0.00 0.00 00:16:20.217 =================================================================================================================== 00:16:20.217 Total : 17791.67 69.50 0.00 0.00 0.00 0.00 0.00 00:16:20.217 00:16:21.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:21.158 Nvme0n1 : 4.00 17823.75 69.62 0.00 0.00 0.00 0.00 0.00 00:16:21.158 =================================================================================================================== 00:16:21.158 Total : 17823.75 69.62 0.00 0.00 
0.00 0.00 0.00 00:16:21.158 00:16:22.100 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:22.100 Nvme0n1 : 5.00 17847.80 69.72 0.00 0.00 0.00 0.00 0.00 00:16:22.100 =================================================================================================================== 00:16:22.100 Total : 17847.80 69.72 0.00 0.00 0.00 0.00 0.00 00:16:22.100 00:16:23.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:23.481 Nvme0n1 : 6.00 17850.50 69.73 0.00 0.00 0.00 0.00 0.00 00:16:23.481 =================================================================================================================== 00:16:23.481 Total : 17850.50 69.73 0.00 0.00 0.00 0.00 0.00 00:16:23.481 00:16:24.052 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:24.052 Nvme0n1 : 7.00 17865.00 69.79 0.00 0.00 0.00 0.00 0.00 00:16:24.052 =================================================================================================================== 00:16:24.052 Total : 17865.00 69.79 0.00 0.00 0.00 0.00 0.00 00:16:24.052 00:16:25.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:25.434 Nvme0n1 : 8.00 17879.88 69.84 0.00 0.00 0.00 0.00 0.00 00:16:25.434 =================================================================================================================== 00:16:25.434 Total : 17879.88 69.84 0.00 0.00 0.00 0.00 0.00 00:16:25.434 00:16:26.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:26.373 Nvme0n1 : 9.00 17892.33 69.89 0.00 0.00 0.00 0.00 0.00 00:16:26.373 =================================================================================================================== 00:16:26.373 Total : 17892.33 69.89 0.00 0.00 0.00 0.00 0.00 00:16:26.373 00:16:27.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:27.313 Nvme0n1 : 10.00 17903.10 69.93 0.00 0.00 0.00 0.00 0.00 00:16:27.313 =================================================================================================================== 00:16:27.313 Total : 17903.10 69.93 0.00 0.00 0.00 0.00 0.00 00:16:27.313 00:16:27.313 00:16:27.313 Latency(us) 00:16:27.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:27.313 Nvme0n1 : 10.01 17903.21 69.93 0.00 0.00 7145.04 4096.00 11632.64 00:16:27.313 =================================================================================================================== 00:16:27.313 Total : 17903.21 69.93 0.00 0.00 7145.04 4096.00 11632.64 00:16:27.313 0 00:16:27.313 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 640971 00:16:27.313 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 640971 ']' 00:16:27.313 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 640971 00:16:27.313 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:16:27.313 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:27.313 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 640971 00:16:27.313 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:27.313 09:25:14 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:27.313 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 640971' 00:16:27.313 killing process with pid 640971 00:16:27.313 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 640971 00:16:27.313 Received shutdown signal, test time was about 10.000000 seconds 00:16:27.313 00:16:27.313 Latency(us) 00:16:27.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.313 =================================================================================================================== 00:16:27.313 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:27.313 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 640971 00:16:27.313 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:27.574 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:27.574 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5f0e2bd-a97b-42fc-b8b9-d1e795ed4ef7 00:16:27.574 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:27.834 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:27.834 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:16:27.834 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 637380 00:16:27.834 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 637380 00:16:27.834 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 637380 Killed "${NVMF_APP[@]}" "$@" 00:16:27.834 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:16:27.834 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:16:27.834 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:27.834 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:27.834 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:27.834 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=643334 00:16:27.834 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 643334 00:16:27.834 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:27.834 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 643334 ']' 00:16:27.834 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.834 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:16:27.834 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.834 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:27.834 09:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:27.834 [2024-07-15 09:25:15.027996] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:16:27.834 [2024-07-15 09:25:15.028054] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.095 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.095 [2024-07-15 09:25:15.101455] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.095 [2024-07-15 09:25:15.168135] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.095 [2024-07-15 09:25:15.168187] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:28.095 [2024-07-15 09:25:15.168195] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:28.095 [2024-07-15 09:25:15.168201] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:28.095 [2024-07-15 09:25:15.168206] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
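
What makes this the dirty variant: after the lvstore has been grown, the original nvmf target (pid 637380) is killed with SIGKILL, so the lvstore is never cleanly unloaded. The replacement target (pid 643334) then recreates the AIO bdev, and on load the blobstore has to run recovery, which is what the bs_recover / "Recover: blob" notices that follow are reporting; the test afterwards re-checks free_clusters == 61 and total_data_clusters == 99. A condensed sketch of the reload, using the commands and UUIDs from this run:

  # Recreating the AIO bdev triggers lvstore examine; the blobstore replays its metadata.
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_lvol_get_lvstores -u d5f0e2bd-a97b-42fc-b8b9-d1e795ed4ef7 \
      | jq -r '.[0].free_clusters, .[0].total_data_clusters'   # expected: 61 and 99
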
00:16:28.095 [2024-07-15 09:25:15.168224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.666 09:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.666 09:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:16:28.666 09:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:28.666 09:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:28.666 09:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:28.666 09:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.666 09:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:28.926 [2024-07-15 09:25:15.956848] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:28.926 [2024-07-15 09:25:15.956935] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:28.926 [2024-07-15 09:25:15.956964] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:28.926 09:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:16:28.926 09:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 084d29ea-8d54-4114-a6f0-5119bd8fdc74 00:16:28.926 09:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=084d29ea-8d54-4114-a6f0-5119bd8fdc74 00:16:28.926 09:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:28.926 09:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:16:28.926 09:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:28.926 09:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:28.926 09:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:29.186 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 084d29ea-8d54-4114-a6f0-5119bd8fdc74 -t 2000 00:16:29.187 [ 00:16:29.187 { 00:16:29.187 "name": "084d29ea-8d54-4114-a6f0-5119bd8fdc74", 00:16:29.187 "aliases": [ 00:16:29.187 "lvs/lvol" 00:16:29.187 ], 00:16:29.187 "product_name": "Logical Volume", 00:16:29.187 "block_size": 4096, 00:16:29.187 "num_blocks": 38912, 00:16:29.187 "uuid": "084d29ea-8d54-4114-a6f0-5119bd8fdc74", 00:16:29.187 "assigned_rate_limits": { 00:16:29.187 "rw_ios_per_sec": 0, 00:16:29.187 "rw_mbytes_per_sec": 0, 00:16:29.187 "r_mbytes_per_sec": 0, 00:16:29.187 "w_mbytes_per_sec": 0 00:16:29.187 }, 00:16:29.187 "claimed": false, 00:16:29.187 "zoned": false, 00:16:29.187 "supported_io_types": { 00:16:29.187 "read": true, 00:16:29.187 "write": true, 00:16:29.187 "unmap": true, 00:16:29.187 "flush": false, 00:16:29.187 "reset": true, 00:16:29.187 "nvme_admin": false, 00:16:29.187 "nvme_io": false, 00:16:29.187 "nvme_io_md": 
false, 00:16:29.187 "write_zeroes": true, 00:16:29.187 "zcopy": false, 00:16:29.187 "get_zone_info": false, 00:16:29.187 "zone_management": false, 00:16:29.187 "zone_append": false, 00:16:29.187 "compare": false, 00:16:29.187 "compare_and_write": false, 00:16:29.187 "abort": false, 00:16:29.187 "seek_hole": true, 00:16:29.187 "seek_data": true, 00:16:29.187 "copy": false, 00:16:29.187 "nvme_iov_md": false 00:16:29.187 }, 00:16:29.187 "driver_specific": { 00:16:29.187 "lvol": { 00:16:29.187 "lvol_store_uuid": "d5f0e2bd-a97b-42fc-b8b9-d1e795ed4ef7", 00:16:29.187 "base_bdev": "aio_bdev", 00:16:29.187 "thin_provision": false, 00:16:29.187 "num_allocated_clusters": 38, 00:16:29.187 "snapshot": false, 00:16:29.187 "clone": false, 00:16:29.187 "esnap_clone": false 00:16:29.187 } 00:16:29.187 } 00:16:29.187 } 00:16:29.187 ] 00:16:29.187 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:16:29.187 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5f0e2bd-a97b-42fc-b8b9-d1e795ed4ef7 00:16:29.187 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:16:29.447 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:16:29.447 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5f0e2bd-a97b-42fc-b8b9-d1e795ed4ef7 00:16:29.447 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:16:29.447 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:16:29.447 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:29.708 [2024-07-15 09:25:16.720750] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:29.708 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5f0e2bd-a97b-42fc-b8b9-d1e795ed4ef7 00:16:29.708 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:16:29.708 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5f0e2bd-a97b-42fc-b8b9-d1e795ed4ef7 00:16:29.708 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:29.708 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:29.708 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:29.708 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:29.708 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:16:29.708 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:29.708 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:29.708 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:29.708 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5f0e2bd-a97b-42fc-b8b9-d1e795ed4ef7 00:16:29.708 request: 00:16:29.708 { 00:16:29.708 "uuid": "d5f0e2bd-a97b-42fc-b8b9-d1e795ed4ef7", 00:16:29.708 "method": "bdev_lvol_get_lvstores", 00:16:29.708 "req_id": 1 00:16:29.708 } 00:16:29.708 Got JSON-RPC error response 00:16:29.708 response: 00:16:29.708 { 00:16:29.708 "code": -19, 00:16:29.708 "message": "No such device" 00:16:29.708 } 00:16:29.708 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:16:29.708 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:29.708 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:29.708 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:29.708 09:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:29.969 aio_bdev 00:16:29.969 09:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 084d29ea-8d54-4114-a6f0-5119bd8fdc74 00:16:29.969 09:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=084d29ea-8d54-4114-a6f0-5119bd8fdc74 00:16:29.969 09:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:29.969 09:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:16:29.969 09:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:29.969 09:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:29.969 09:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:30.229 09:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 084d29ea-8d54-4114-a6f0-5119bd8fdc74 -t 2000 00:16:30.229 [ 00:16:30.229 { 00:16:30.229 "name": "084d29ea-8d54-4114-a6f0-5119bd8fdc74", 00:16:30.229 "aliases": [ 00:16:30.229 "lvs/lvol" 00:16:30.229 ], 00:16:30.229 "product_name": "Logical Volume", 00:16:30.229 "block_size": 4096, 00:16:30.229 "num_blocks": 38912, 00:16:30.229 "uuid": "084d29ea-8d54-4114-a6f0-5119bd8fdc74", 00:16:30.229 "assigned_rate_limits": { 00:16:30.229 "rw_ios_per_sec": 0, 00:16:30.229 "rw_mbytes_per_sec": 0, 00:16:30.229 "r_mbytes_per_sec": 0, 00:16:30.229 "w_mbytes_per_sec": 0 00:16:30.229 }, 00:16:30.229 "claimed": false, 00:16:30.229 "zoned": false, 00:16:30.229 "supported_io_types": { 
00:16:30.229 "read": true, 00:16:30.229 "write": true, 00:16:30.229 "unmap": true, 00:16:30.229 "flush": false, 00:16:30.229 "reset": true, 00:16:30.229 "nvme_admin": false, 00:16:30.229 "nvme_io": false, 00:16:30.229 "nvme_io_md": false, 00:16:30.229 "write_zeroes": true, 00:16:30.229 "zcopy": false, 00:16:30.229 "get_zone_info": false, 00:16:30.229 "zone_management": false, 00:16:30.229 "zone_append": false, 00:16:30.229 "compare": false, 00:16:30.229 "compare_and_write": false, 00:16:30.229 "abort": false, 00:16:30.229 "seek_hole": true, 00:16:30.229 "seek_data": true, 00:16:30.229 "copy": false, 00:16:30.229 "nvme_iov_md": false 00:16:30.229 }, 00:16:30.229 "driver_specific": { 00:16:30.229 "lvol": { 00:16:30.229 "lvol_store_uuid": "d5f0e2bd-a97b-42fc-b8b9-d1e795ed4ef7", 00:16:30.229 "base_bdev": "aio_bdev", 00:16:30.229 "thin_provision": false, 00:16:30.229 "num_allocated_clusters": 38, 00:16:30.229 "snapshot": false, 00:16:30.229 "clone": false, 00:16:30.230 "esnap_clone": false 00:16:30.230 } 00:16:30.230 } 00:16:30.230 } 00:16:30.230 ] 00:16:30.230 09:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:16:30.230 09:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5f0e2bd-a97b-42fc-b8b9-d1e795ed4ef7 00:16:30.230 09:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:30.490 09:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:30.490 09:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5f0e2bd-a97b-42fc-b8b9-d1e795ed4ef7 00:16:30.490 09:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:30.490 09:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:30.490 09:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 084d29ea-8d54-4114-a6f0-5119bd8fdc74 00:16:30.750 09:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d5f0e2bd-a97b-42fc-b8b9-d1e795ed4ef7 00:16:31.011 09:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:31.011 09:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:31.011 00:16:31.011 real 0m16.927s 00:16:31.011 user 0m44.511s 00:16:31.011 sys 0m2.916s 00:16:31.011 09:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:31.011 09:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:31.011 ************************************ 00:16:31.011 END TEST lvs_grow_dirty 00:16:31.011 ************************************ 00:16:31.011 09:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:16:31.011 09:25:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:16:31.011 09:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:16:31.011 09:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:16:31.011 09:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:31.011 09:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:31.011 09:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:31.271 nvmf_trace.0 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:31.271 rmmod nvme_tcp 00:16:31.271 rmmod nvme_fabrics 00:16:31.271 rmmod nvme_keyring 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 643334 ']' 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 643334 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 643334 ']' 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 643334 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 643334 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 643334' 00:16:31.271 killing process with pid 643334 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 643334 00:16:31.271 09:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 643334 00:16:31.531 09:25:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:31.531 09:25:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:31.531 09:25:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:31.531 09:25:18 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:31.531 09:25:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:31.532 09:25:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.532 09:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:31.532 09:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.551 09:25:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:33.551 00:16:33.551 real 0m43.837s 00:16:33.551 user 1m5.677s 00:16:33.551 sys 0m10.434s 00:16:33.551 09:25:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:33.551 09:25:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:33.551 ************************************ 00:16:33.551 END TEST nvmf_lvs_grow 00:16:33.551 ************************************ 00:16:33.551 09:25:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:33.551 09:25:20 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:33.551 09:25:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:33.551 09:25:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:33.551 09:25:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:33.551 ************************************ 00:16:33.551 START TEST nvmf_bdev_io_wait 00:16:33.552 ************************************ 00:16:33.552 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:33.552 * Looking for test storage... 
00:16:33.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:16:33.813 09:25:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:41.954 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:41.955 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:41.955 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:41.955 Found net devices under 0000:31:00.0: cvl_0_0 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:41.955 Found net devices under 0000:31:00.1: cvl_0_1 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:41.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:41.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:16:41.955 00:16:41.955 --- 10.0.0.2 ping statistics --- 00:16:41.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.955 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:41.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:41.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:16:41.955 00:16:41.955 --- 10.0.0.1 ping statistics --- 00:16:41.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.955 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=648735 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 648735 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 648735 ']' 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:41.955 09:25:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:41.955 [2024-07-15 09:25:28.757617] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:16:41.956 [2024-07-15 09:25:28.757665] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.956 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.956 [2024-07-15 09:25:28.831695] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:41.956 [2024-07-15 09:25:28.897821] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:41.956 [2024-07-15 09:25:28.897856] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:41.956 [2024-07-15 09:25:28.897863] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:41.956 [2024-07-15 09:25:28.897870] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:41.956 [2024-07-15 09:25:28.897875] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:41.956 [2024-07-15 09:25:28.898010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.956 [2024-07-15 09:25:28.898144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.956 [2024-07-15 09:25:28.898301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.956 [2024-07-15 09:25:28.898301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:42.528 [2024-07-15 09:25:29.622252] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
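At this point the bdev_io_wait target is up: nvmf_tgt was started with --wait-for-rpc inside the cvl_0_0_ns_spdk namespace, the bdev layer was deliberately configured with a tiny bdev_io pool and cache (-p 5 -c 1) so that bdevperf will exercise the IO_WAIT retry path, framework init completed, and the TCP transport was created. The lines that follow add a 64 MiB Malloc bdev and expose it through subsystem cnode1 on 10.0.0.2:4420. A minimal sketch of that RPC sequence, with values copied from the trace; the rpc.py path and the default /var/tmp/spdk.sock socket are assumptions:

rpc=./scripts/rpc.py                                   # assumed location of SPDK's rpc.py
$rpc bdev_set_options -p 5 -c 1                        # small bdev_io pool/cache to force IO_WAIT
$rpc framework_start_init                              # leave --wait-for-rpc mode
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0              # 64 MiB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420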
00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:42.528 Malloc0 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:42.528 [2024-07-15 09:25:29.694012] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=648841 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=648843 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:42.528 { 00:16:42.528 "params": { 00:16:42.528 "name": "Nvme$subsystem", 00:16:42.528 "trtype": "$TEST_TRANSPORT", 00:16:42.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.528 "adrfam": "ipv4", 00:16:42.528 "trsvcid": "$NVMF_PORT", 00:16:42.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.528 "hdgst": ${hdgst:-false}, 00:16:42.528 "ddgst": ${ddgst:-false} 00:16:42.528 }, 00:16:42.528 "method": "bdev_nvme_attach_controller" 00:16:42.528 } 00:16:42.528 EOF 00:16:42.528 )") 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=648846 00:16:42.528 09:25:29 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=648850 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:42.528 { 00:16:42.528 "params": { 00:16:42.528 "name": "Nvme$subsystem", 00:16:42.528 "trtype": "$TEST_TRANSPORT", 00:16:42.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.528 "adrfam": "ipv4", 00:16:42.528 "trsvcid": "$NVMF_PORT", 00:16:42.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.528 "hdgst": ${hdgst:-false}, 00:16:42.528 "ddgst": ${ddgst:-false} 00:16:42.528 }, 00:16:42.528 "method": "bdev_nvme_attach_controller" 00:16:42.528 } 00:16:42.528 EOF 00:16:42.528 )") 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:42.528 { 00:16:42.528 "params": { 00:16:42.528 "name": "Nvme$subsystem", 00:16:42.528 "trtype": "$TEST_TRANSPORT", 00:16:42.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.528 "adrfam": "ipv4", 00:16:42.528 "trsvcid": "$NVMF_PORT", 00:16:42.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.528 "hdgst": ${hdgst:-false}, 00:16:42.528 "ddgst": ${ddgst:-false} 00:16:42.528 }, 00:16:42.528 "method": "bdev_nvme_attach_controller" 00:16:42.528 } 00:16:42.528 EOF 00:16:42.528 )") 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:42.528 { 00:16:42.528 "params": { 00:16:42.528 "name": "Nvme$subsystem", 00:16:42.528 "trtype": "$TEST_TRANSPORT", 00:16:42.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.528 "adrfam": "ipv4", 00:16:42.528 "trsvcid": "$NVMF_PORT", 00:16:42.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.528 "hdgst": ${hdgst:-false}, 00:16:42.528 "ddgst": ${ddgst:-false} 00:16:42.528 }, 00:16:42.528 "method": "bdev_nvme_attach_controller" 00:16:42.528 } 00:16:42.528 EOF 00:16:42.528 )") 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 648841 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:42.528 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:42.528 "params": { 00:16:42.528 "name": "Nvme1", 00:16:42.528 "trtype": "tcp", 00:16:42.528 "traddr": "10.0.0.2", 00:16:42.528 "adrfam": "ipv4", 00:16:42.528 "trsvcid": "4420", 00:16:42.528 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:42.528 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:42.528 "hdgst": false, 00:16:42.528 "ddgst": false 00:16:42.529 }, 00:16:42.529 "method": "bdev_nvme_attach_controller" 00:16:42.529 }' 00:16:42.529 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
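The heredoc fragments above and below are produced by gen_nvmf_target_json: each of the four bdevperf instances (write, read, flush, unmap on core masks 0x10/0x20/0x40/0x80) receives its entire bdev configuration as an inline JSON document on --json /dev/fd/63 via process substitution, so no config file is written to disk. A sketch of an equivalent standalone invocation for the write workload follows; the subsystems/bdev/config envelope is an assumption (the trace only shows the bdev_nvme_attach_controller fragment), and the bdevperf path is shortened from the trace:

# Hypothetical temp file standing in for the /dev/fd/63 process substitution.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    } ]
  } ]
}
EOF
# 128-deep 4 KiB writes for 1 second against the attached Nvme1n1 bdev.
./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w write -t 1 -s 256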
00:16:42.529 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:42.529 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:42.529 "params": { 00:16:42.529 "name": "Nvme1", 00:16:42.529 "trtype": "tcp", 00:16:42.529 "traddr": "10.0.0.2", 00:16:42.529 "adrfam": "ipv4", 00:16:42.529 "trsvcid": "4420", 00:16:42.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:42.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:42.529 "hdgst": false, 00:16:42.529 "ddgst": false 00:16:42.529 }, 00:16:42.529 "method": "bdev_nvme_attach_controller" 00:16:42.529 }' 00:16:42.529 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:42.529 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:42.529 "params": { 00:16:42.529 "name": "Nvme1", 00:16:42.529 "trtype": "tcp", 00:16:42.529 "traddr": "10.0.0.2", 00:16:42.529 "adrfam": "ipv4", 00:16:42.529 "trsvcid": "4420", 00:16:42.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:42.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:42.529 "hdgst": false, 00:16:42.529 "ddgst": false 00:16:42.529 }, 00:16:42.529 "method": "bdev_nvme_attach_controller" 00:16:42.529 }' 00:16:42.529 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:42.529 09:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:42.529 "params": { 00:16:42.529 "name": "Nvme1", 00:16:42.529 "trtype": "tcp", 00:16:42.529 "traddr": "10.0.0.2", 00:16:42.529 "adrfam": "ipv4", 00:16:42.529 "trsvcid": "4420", 00:16:42.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:42.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:42.529 "hdgst": false, 00:16:42.529 "ddgst": false 00:16:42.529 }, 00:16:42.529 "method": "bdev_nvme_attach_controller" 00:16:42.529 }' 00:16:42.790 [2024-07-15 09:25:29.745771] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:16:42.790 [2024-07-15 09:25:29.745827] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:42.790 [2024-07-15 09:25:29.749904] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:16:42.790 [2024-07-15 09:25:29.749949] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:16:42.790 [2024-07-15 09:25:29.750621] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:16:42.790 [2024-07-15 09:25:29.750668] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:16:42.791 [2024-07-15 09:25:29.751168] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:16:42.791 [2024-07-15 09:25:29.751213] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:16:42.791 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.791 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.791 [2024-07-15 09:25:29.901051] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.791 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.791 [2024-07-15 09:25:29.951977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:42.791 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.791 [2024-07-15 09:25:29.957047] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.052 [2024-07-15 09:25:30.006501] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.052 [2024-07-15 09:25:30.008459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:43.052 [2024-07-15 09:25:30.054770] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.052 [2024-07-15 09:25:30.059537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:16:43.052 [2024-07-15 09:25:30.105322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:43.052 Running I/O for 1 seconds... 00:16:43.052 Running I/O for 1 seconds... 00:16:43.313 Running I/O for 1 seconds... 00:16:43.313 Running I/O for 1 seconds... 00:16:44.255 00:16:44.255 Latency(us) 00:16:44.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.255 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:16:44.255 Nvme1n1 : 1.00 187473.01 732.32 0.00 0.00 679.68 273.07 733.87 00:16:44.255 =================================================================================================================== 00:16:44.255 Total : 187473.01 732.32 0.00 0.00 679.68 273.07 733.87 00:16:44.255 00:16:44.255 Latency(us) 00:16:44.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.255 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:16:44.255 Nvme1n1 : 1.01 8033.32 31.38 0.00 0.00 15797.05 6335.15 27197.44 00:16:44.255 =================================================================================================================== 00:16:44.255 Total : 8033.32 31.38 0.00 0.00 15797.05 6335.15 27197.44 00:16:44.255 00:16:44.255 Latency(us) 00:16:44.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.255 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:16:44.255 Nvme1n1 : 1.00 18988.03 74.17 0.00 0.00 6721.07 4669.44 18240.85 00:16:44.255 =================================================================================================================== 00:16:44.255 Total : 18988.03 74.17 0.00 0.00 6721.07 4669.44 18240.85 00:16:44.255 00:16:44.255 Latency(us) 00:16:44.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.255 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:16:44.255 Nvme1n1 : 1.00 7648.89 29.88 0.00 0.00 16689.79 4341.76 39321.60 00:16:44.255 =================================================================================================================== 00:16:44.255 Total : 7648.89 29.88 0.00 0.00 16689.79 4341.76 39321.60 00:16:44.516 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 648843 00:16:44.516 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 648846 00:16:44.516 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 648850 00:16:44.516 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:44.516 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.516 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:44.516 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.516 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:16:44.516 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:16:44.516 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:44.516 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:16:44.516 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:44.516 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:16:44.516 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:44.516 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:44.516 rmmod nvme_tcp 00:16:44.516 rmmod nvme_fabrics 00:16:44.516 rmmod nvme_keyring 00:16:44.516 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:44.516 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:16:44.516 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:16:44.516 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 648735 ']' 00:16:44.516 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 648735 00:16:44.516 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 648735 ']' 00:16:44.516 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 648735 00:16:44.517 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:16:44.517 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:44.517 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 648735 00:16:44.517 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:44.517 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:44.517 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 648735' 00:16:44.517 killing process with pid 648735 00:16:44.517 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 648735 00:16:44.517 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 648735 00:16:44.517 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:44.517 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:44.517 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:44.517 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:44.517 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:16:44.777 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.777 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.777 09:25:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.691 09:25:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:46.691 00:16:46.691 real 0m13.135s 00:16:46.691 user 0m18.810s 00:16:46.691 sys 0m7.168s 00:16:46.691 09:25:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:46.691 09:25:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:46.691 ************************************ 00:16:46.691 END TEST nvmf_bdev_io_wait 00:16:46.691 ************************************ 00:16:46.691 09:25:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:46.691 09:25:33 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:46.691 09:25:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:46.691 09:25:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:46.691 09:25:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:46.691 ************************************ 00:16:46.691 START TEST nvmf_queue_depth 00:16:46.691 ************************************ 00:16:46.691 09:25:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:46.953 * Looking for test storage... 
00:16:46.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.953 09:25:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.953 09:25:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:46.953 09:25:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:46.953 09:25:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:16:46.953 09:25:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:55.089 
09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:55.089 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:55.089 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:55.089 Found net devices under 0000:31:00.0: cvl_0_0 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:55.089 Found net devices under 0000:31:00.1: cvl_0_1 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:55.089 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:55.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:55.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:16:55.090 00:16:55.090 --- 10.0.0.2 ping statistics --- 00:16:55.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.090 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:55.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:55.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.428 ms 00:16:55.090 00:16:55.090 --- 10.0.0.1 ping statistics --- 00:16:55.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.090 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=653864 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 653864 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 653864 ']' 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:55.090 09:25:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:55.090 [2024-07-15 09:25:41.978368] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:16:55.090 [2024-07-15 09:25:41.978431] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.090 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.090 [2024-07-15 09:25:42.074635] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.090 [2024-07-15 09:25:42.168626] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:55.090 [2024-07-15 09:25:42.168682] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:55.090 [2024-07-15 09:25:42.168690] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:55.090 [2024-07-15 09:25:42.168697] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:55.090 [2024-07-15 09:25:42.168703] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:55.090 [2024-07-15 09:25:42.168734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.661 09:25:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:55.661 09:25:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:16:55.661 09:25:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:55.661 09:25:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:55.661 09:25:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:55.661 09:25:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:55.661 09:25:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:55.661 09:25:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.661 09:25:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:55.661 [2024-07-15 09:25:42.812860] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:55.661 09:25:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.661 09:25:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:55.661 09:25:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.661 09:25:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:55.661 Malloc0 00:16:55.661 09:25:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.661 09:25:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:55.661 09:25:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.661 09:25:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:55.661 09:25:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.661 09:25:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:55.661 09:25:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.661 
09:25:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:55.921 09:25:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.921 09:25:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.921 09:25:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.921 09:25:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:55.921 [2024-07-15 09:25:42.866633] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.921 09:25:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.921 09:25:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=654155 00:16:55.921 09:25:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:55.921 09:25:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:16:55.921 09:25:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 654155 /var/tmp/bdevperf.sock 00:16:55.921 09:25:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 654155 ']' 00:16:55.921 09:25:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:55.921 09:25:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:55.921 09:25:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:55.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:55.921 09:25:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:55.921 09:25:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:55.921 [2024-07-15 09:25:42.922203] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:16:55.921 [2024-07-15 09:25:42.922265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid654155 ] 00:16:55.921 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.921 [2024-07-15 09:25:42.994858] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.921 [2024-07-15 09:25:43.069241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.860 09:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:56.860 09:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:16:56.860 09:25:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:56.860 09:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.860 09:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:56.860 NVMe0n1 00:16:56.860 09:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.860 09:25:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:56.860 Running I/O for 10 seconds... 00:17:06.847 00:17:06.847 Latency(us) 00:17:06.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.847 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:06.847 Verification LBA range: start 0x0 length 0x4000 00:17:06.847 NVMe0n1 : 10.07 11688.00 45.66 0.00 0.00 87297.09 24357.55 64662.19 00:17:06.847 =================================================================================================================== 00:17:06.847 Total : 11688.00 45.66 0.00 0.00 87297.09 24357.55 64662.19 00:17:06.847 0 00:17:06.847 09:25:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 654155 00:17:06.847 09:25:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 654155 ']' 00:17:06.847 09:25:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 654155 00:17:06.847 09:25:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:17:06.847 09:25:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:06.847 09:25:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 654155 00:17:06.847 09:25:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:06.847 09:25:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:06.847 09:25:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 654155' 00:17:06.847 killing process with pid 654155 00:17:06.847 09:25:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 654155 00:17:06.847 Received shutdown signal, test time was about 10.000000 seconds 00:17:06.847 00:17:06.847 Latency(us) 00:17:06.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.847 =================================================================================================================== 
00:17:06.847 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:06.847 09:25:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 654155 00:17:07.106 09:25:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:07.106 09:25:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:07.106 09:25:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:07.106 09:25:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:17:07.106 09:25:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:07.106 09:25:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:17:07.106 09:25:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:07.106 09:25:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:07.106 rmmod nvme_tcp 00:17:07.106 rmmod nvme_fabrics 00:17:07.106 rmmod nvme_keyring 00:17:07.106 09:25:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:07.106 09:25:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:17:07.106 09:25:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:17:07.106 09:25:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 653864 ']' 00:17:07.106 09:25:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 653864 00:17:07.106 09:25:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 653864 ']' 00:17:07.106 09:25:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 653864 00:17:07.106 09:25:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:17:07.106 09:25:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:07.106 09:25:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 653864 00:17:07.106 09:25:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:07.106 09:25:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:07.106 09:25:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 653864' 00:17:07.106 killing process with pid 653864 00:17:07.106 09:25:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 653864 00:17:07.106 09:25:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 653864 00:17:07.367 09:25:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:07.367 09:25:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:07.367 09:25:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:07.367 09:25:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:07.367 09:25:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:07.367 09:25:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.367 09:25:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.367 09:25:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.277 09:25:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:09.536 00:17:09.536 real 0m22.610s 00:17:09.536 user 0m25.450s 
00:17:09.536 sys 0m7.186s 00:17:09.536 09:25:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:09.536 09:25:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:09.536 ************************************ 00:17:09.536 END TEST nvmf_queue_depth 00:17:09.536 ************************************ 00:17:09.536 09:25:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:09.536 09:25:56 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:09.536 09:25:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:09.536 09:25:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:09.536 09:25:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:09.536 ************************************ 00:17:09.536 START TEST nvmf_target_multipath 00:17:09.536 ************************************ 00:17:09.536 09:25:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:09.536 * Looking for test storage... 00:17:09.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:09.537 09:25:56 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:17:09.537 09:25:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:17.671 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:17.671 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:17.671 Found net devices under 0000:31:00.0: cvl_0_0 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:17.671 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:17.672 Found net devices under 0000:31:00.1: cvl_0_1 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:17.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:17.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:17:17.672 00:17:17.672 --- 10.0.0.2 ping statistics --- 00:17:17.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.672 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:17.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:17.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:17:17.672 00:17:17.672 --- 10.0.0.1 ping statistics --- 00:17:17.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.672 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:17.672 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:17.932 09:26:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:17:17.932 09:26:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:17.932 only one NIC for nvmf test 00:17:17.932 09:26:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:17:17.932 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:17.932 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:17.932 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:17.932 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:17.932 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:17.932 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:17.932 rmmod nvme_tcp 00:17:17.932 rmmod nvme_fabrics 00:17:17.932 rmmod nvme_keyring 00:17:17.932 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:17.932 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:17.932 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:17.932 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:17.932 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:17.932 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:17.932 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:17.932 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:17.932 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:17.933 09:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.933 09:26:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.933 09:26:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.920 09:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:17:19.920 09:26:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:17:19.920 09:26:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:17:19.920 09:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:19.920 09:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:19.920 09:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:19.920 09:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:19.920 09:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:19.920 09:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:19.920 09:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:19.920 09:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:19.920 09:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:19.920 09:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:19.920 09:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:19.920 09:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:19.920 09:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:19.920 09:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:19.920 09:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:19.920 09:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.920 09:26:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:19.920 09:26:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.920 09:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:19.920 00:17:19.920 real 0m10.533s 00:17:19.920 user 0m2.276s 00:17:19.920 sys 0m6.134s 00:17:19.920 09:26:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:19.920 09:26:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:19.920 ************************************ 00:17:19.920 END TEST nvmf_target_multipath 00:17:19.920 ************************************ 00:17:20.180 09:26:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:20.180 09:26:07 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:20.180 09:26:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:20.180 09:26:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:20.180 09:26:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:20.180 ************************************ 00:17:20.180 START TEST nvmf_zcopy 00:17:20.180 ************************************ 00:17:20.180 09:26:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:20.180 * Looking for test storage... 
00:17:20.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:20.180 09:26:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:20.180 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:20.180 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.180 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.180 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.180 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.180 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:20.180 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:20.180 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.180 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:20.180 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.180 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:20.180 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:20.180 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:20.180 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.180 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:17:20.181 09:26:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:28.306 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:28.306 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:17:28.306 09:26:15 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:17:28.306 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:28.306 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:28.306 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:28.306 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:28.306 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:17:28.306 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:28.306 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:17:28.306 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:17:28.306 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:28.307 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:28.307 
09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:28.307 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:28.307 Found net devices under 0000:31:00.0: cvl_0_0 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:28.307 Found net devices under 0000:31:00.1: cvl_0_1 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- 
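The trace above resolves the two Intel E810 functions (8086:159b) to their ice-bound net devices (cvl_0_0, cvl_0_1) by globbing /sys/bus/pci/devices/$pci/net/. A small sketch of the same lookup, assuming the ice driver is already bound:

# Sketch: map each 8086:159b function to its kernel net device through sysfs.
for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] && echo "$pci -> ${dev##*/}"   # e.g. 0000:31:00.0 -> cvl_0_0
    done
done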
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:28.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:28.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:17:28.307 00:17:28.307 --- 10.0.0.2 ping statistics --- 00:17:28.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.307 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:17:28.307 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:28.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:28.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:17:28.307 00:17:28.308 --- 10.0.0.1 ping statistics --- 00:17:28.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.308 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:17:28.308 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:28.308 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:17:28.308 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:28.308 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:28.308 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:28.308 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:28.308 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:28.308 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:28.308 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:28.308 09:26:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:28.308 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:28.308 09:26:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:28.308 09:26:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:28.308 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=665701 00:17:28.308 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 665701 00:17:28.308 09:26:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:28.308 09:26:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 665701 ']' 00:17:28.308 09:26:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.308 09:26:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:28.308 09:26:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.308 09:26:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:28.308 09:26:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:28.569 [2024-07-15 09:26:15.547084] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:17:28.569 [2024-07-15 09:26:15.547151] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.569 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.569 [2024-07-15 09:26:15.641533] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.569 [2024-07-15 09:26:15.735128] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:28.569 [2024-07-15 09:26:15.735185] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
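nvmf_tcp_init above builds a loopback topology out of the two ports: the target-side port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator side (cvl_0_1) keeps 10.0.0.1/24 in the default namespace, and both directions are verified with ping. A condensed sketch of the same commands, with names and addresses taken from the trace:

# Sketch of the netns loopback topology used by the test.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator port, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # allow NVMe/TCP into the initiator port
ping -c 1 10.0.0.2                                           # default ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # namespace -> default ns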
00:17:28.569 [2024-07-15 09:26:15.735192] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:28.569 [2024-07-15 09:26:15.735199] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:28.569 [2024-07-15 09:26:15.735205] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:28.569 [2024-07-15 09:26:15.735230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.183 09:26:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:29.183 09:26:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:17:29.183 09:26:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:29.183 09:26:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:29.183 09:26:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:29.183 09:26:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.183 09:26:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:29.183 09:26:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:29.183 09:26:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.183 09:26:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:29.183 [2024-07-15 09:26:16.367437] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:29.183 09:26:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.183 09:26:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:29.183 09:26:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.183 09:26:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:29.183 09:26:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.183 09:26:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:29.183 09:26:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.183 09:26:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:29.441 [2024-07-15 09:26:16.383602] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:29.441 09:26:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.441 09:26:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:29.441 09:26:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.441 09:26:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:29.441 09:26:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.441 09:26:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:29.441 09:26:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.441 09:26:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:29.441 malloc0 00:17:29.441 09:26:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.441 
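The rpc_cmd calls above configure the target that was just started inside the namespace: a TCP transport with zero-copy enabled, subsystem cnode1, data and discovery listeners on 10.0.0.2:4420, and a malloc bdev; the namespace attach follows just below. A sketch of the same sequence via scripts/rpc.py, which is what rpc_cmd wraps (flags copied from the trace; the UNIX RPC socket /var/tmp/spdk.sock is reachable from the default namespace):

# Sketch of the target-side configuration issued by the test.
RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy            # TCP transport, zero-copy on
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 4096 -b malloc0                   # 32 MB bdev, 4096-byte blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1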
09:26:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:29.441 09:26:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.441 09:26:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:29.441 09:26:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.441 09:26:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:29.442 09:26:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:29.442 09:26:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:29.442 09:26:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:29.442 09:26:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:29.442 09:26:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:29.442 { 00:17:29.442 "params": { 00:17:29.442 "name": "Nvme$subsystem", 00:17:29.442 "trtype": "$TEST_TRANSPORT", 00:17:29.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:29.442 "adrfam": "ipv4", 00:17:29.442 "trsvcid": "$NVMF_PORT", 00:17:29.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:29.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:29.442 "hdgst": ${hdgst:-false}, 00:17:29.442 "ddgst": ${ddgst:-false} 00:17:29.442 }, 00:17:29.442 "method": "bdev_nvme_attach_controller" 00:17:29.442 } 00:17:29.442 EOF 00:17:29.442 )") 00:17:29.442 09:26:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:29.442 09:26:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:29.442 09:26:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:29.442 09:26:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:29.442 "params": { 00:17:29.442 "name": "Nvme1", 00:17:29.442 "trtype": "tcp", 00:17:29.442 "traddr": "10.0.0.2", 00:17:29.442 "adrfam": "ipv4", 00:17:29.442 "trsvcid": "4420", 00:17:29.442 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.442 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:29.442 "hdgst": false, 00:17:29.442 "ddgst": false 00:17:29.442 }, 00:17:29.442 "method": "bdev_nvme_attach_controller" 00:17:29.442 }' 00:17:29.442 [2024-07-15 09:26:16.470774] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:17:29.442 [2024-07-15 09:26:16.470847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665876 ] 00:17:29.442 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.442 [2024-07-15 09:26:16.544522] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.442 [2024-07-15 09:26:16.621838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.701 Running I/O for 10 seconds... 
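The initiator side above runs bdevperf with a bdev_nvme configuration generated by gen_nvmf_target_json and handed over through /dev/fd/62. A sketch of an equivalent standalone invocation: the params block is copied from the printf in the trace, while wrapping it in the usual SPDK "subsystems"/"bdev" JSON-config layout is an assumption about what the helper emits around it.

# Sketch: hand-written bdev_nvme config for the same bdevperf run.
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same switches as the trace: 10 s verify workload, queue depth 128, 8 KiB I/O.
./build/examples/bdevperf --json /tmp/nvme1.json -t 10 -q 128 -w verify -o 8192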
00:17:39.685 00:17:39.685 Latency(us) 00:17:39.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.685 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:17:39.685 Verification LBA range: start 0x0 length 0x1000 00:17:39.685 Nvme1n1 : 10.01 9543.50 74.56 0.00 0.00 13360.97 1747.63 28835.84 00:17:39.685 =================================================================================================================== 00:17:39.685 Total : 9543.50 74.56 0.00 0.00 13360.97 1747.63 28835.84 00:17:39.946 09:26:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=667888 00:17:39.946 09:26:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:17:39.946 09:26:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:39.946 09:26:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:17:39.946 09:26:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:17:39.946 09:26:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:39.946 09:26:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:39.946 09:26:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:39.946 09:26:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:39.946 { 00:17:39.946 "params": { 00:17:39.946 "name": "Nvme$subsystem", 00:17:39.946 "trtype": "$TEST_TRANSPORT", 00:17:39.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:39.946 "adrfam": "ipv4", 00:17:39.946 "trsvcid": "$NVMF_PORT", 00:17:39.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:39.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:39.946 "hdgst": ${hdgst:-false}, 00:17:39.946 "ddgst": ${ddgst:-false} 00:17:39.946 }, 00:17:39.946 "method": "bdev_nvme_attach_controller" 00:17:39.946 } 00:17:39.946 EOF 00:17:39.946 )") 00:17:39.946 09:26:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:39.946 [2024-07-15 09:26:26.936305] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:26.936333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 09:26:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
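The verify run above settles at 9543.50 IOPS with an 8192-byte I/O size, which is exactly where the reported 74.56 MiB/s comes from; a one-liner to check the conversion:

# IOPS x I/O size, expressed in MiB/s; prints ~74.56 for the run above.
awk 'BEGIN { printf "%.2f MiB/s\n", 9543.50 * 8192 / (1024 * 1024) }'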
00:17:39.946 [2024-07-15 09:26:26.944292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:26.944305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 09:26:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:39.946 09:26:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:39.946 "params": { 00:17:39.946 "name": "Nvme1", 00:17:39.946 "trtype": "tcp", 00:17:39.946 "traddr": "10.0.0.2", 00:17:39.946 "adrfam": "ipv4", 00:17:39.946 "trsvcid": "4420", 00:17:39.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.946 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:39.946 "hdgst": false, 00:17:39.946 "ddgst": false 00:17:39.946 }, 00:17:39.946 "method": "bdev_nvme_attach_controller" 00:17:39.946 }' 00:17:39.946 [2024-07-15 09:26:26.952311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:26.952319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 [2024-07-15 09:26:26.960332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:26.960338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 [2024-07-15 09:26:26.968351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:26.968357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 [2024-07-15 09:26:26.976372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:26.976379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 [2024-07-15 09:26:26.977483] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:17:39.946 [2024-07-15 09:26:26.977527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid667888 ] 00:17:39.946 [2024-07-15 09:26:26.984391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:26.984398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 [2024-07-15 09:26:26.992410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:26.992417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 [2024-07-15 09:26:27.000431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:27.000437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.946 [2024-07-15 09:26:27.008450] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:27.008457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 [2024-07-15 09:26:27.016471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:27.016477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 [2024-07-15 09:26:27.024491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:27.024497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 [2024-07-15 09:26:27.032512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:27.032518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 [2024-07-15 09:26:27.040532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:27.040538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 [2024-07-15 09:26:27.041928] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.946 [2024-07-15 09:26:27.048552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:27.048561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 [2024-07-15 09:26:27.056573] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:27.056583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 [2024-07-15 09:26:27.064593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:27.064600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 [2024-07-15 09:26:27.072613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:27.072621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 [2024-07-15 09:26:27.080635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:27.080647] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 [2024-07-15 09:26:27.088653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:27.088660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 [2024-07-15 09:26:27.096675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:27.096681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 [2024-07-15 09:26:27.104696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:27.104703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 [2024-07-15 09:26:27.105999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.946 [2024-07-15 09:26:27.112716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:27.112723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 [2024-07-15 09:26:27.120741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:27.120756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 [2024-07-15 09:26:27.128765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:27.128775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 [2024-07-15 09:26:27.136785] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:27.136792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:39.946 [2024-07-15 09:26:27.144802] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:39.946 [2024-07-15 09:26:27.144809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.152822] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.152829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.160841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.160849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.168862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.168870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.176883] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.176890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.184911] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.184924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.192929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.192938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.200951] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.200960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.208971] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.208980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.216989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.216996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.225010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.225017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.233030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.233037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.241051] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.241059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.249073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.249082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.257096] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.257104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.265117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.265126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.273135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.273142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.281207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.281220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.289177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.289187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 Running I/O for 5 seconds... 
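The repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs during the 5-second randrw run appear, reading the trace, to come from nvmf_subsystem_add_ns being reissued against cnode1 while malloc0 is already attached as NSID 1; the error is reported from nvmf_rpc_ns_paused, i.e. after the subsystem has been paused for the add, so each failed attempt exercises zcopy I/O across a pause/resume cycle and the failures look like intended test behavior rather than a malfunction. A sketch of reproducing one such failure in isolation (assumes the target is configured as above, run from the SPDK root):

# Sketch: re-adding an already-used NSID produces the same expected error.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# target log: subsystem.c: ... Requested NSID 1 already in use
#             nvmf_rpc.c:  ... Unable to add namespace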
00:17:40.207 [2024-07-15 09:26:27.297199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.297207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.308485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.308504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.317407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.317424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.326389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.326406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.335697] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.335714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.344538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.344554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.353137] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.353153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.361656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.361671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.207 [2024-07-15 09:26:27.370689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.207 [2024-07-15 09:26:27.370705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.208 [2024-07-15 09:26:27.379552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.208 [2024-07-15 09:26:27.379568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.208 [2024-07-15 09:26:27.388672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.208 [2024-07-15 09:26:27.388688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.208 [2024-07-15 09:26:27.397469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.208 [2024-07-15 09:26:27.397487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.208 [2024-07-15 09:26:27.406083] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.208 [2024-07-15 09:26:27.406099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.468 [2024-07-15 09:26:27.415208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.468 [2024-07-15 09:26:27.415224] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.468 [2024-07-15 09:26:27.424245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.468 
[2024-07-15 09:26:27.424261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.468 [2024-07-15 09:26:27.432570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.468 [2024-07-15 09:26:27.432586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.468 [2024-07-15 09:26:27.441177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.468 [2024-07-15 09:26:27.441192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.468 [2024-07-15 09:26:27.450445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.468 [2024-07-15 09:26:27.450462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.468 [2024-07-15 09:26:27.459136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.468 [2024-07-15 09:26:27.459152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.468 [2024-07-15 09:26:27.468281] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.468 [2024-07-15 09:26:27.468297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.468 [2024-07-15 09:26:27.476890] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.468 [2024-07-15 09:26:27.476905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.468 [2024-07-15 09:26:27.486321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.468 [2024-07-15 09:26:27.486336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.468 [2024-07-15 09:26:27.494998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.468 [2024-07-15 09:26:27.495014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.468 [2024-07-15 09:26:27.504235] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.468 [2024-07-15 09:26:27.504251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.468 [2024-07-15 09:26:27.513514] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.468 [2024-07-15 09:26:27.513530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.468 [2024-07-15 09:26:27.522788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.468 [2024-07-15 09:26:27.522804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.468 [2024-07-15 09:26:27.532033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.468 [2024-07-15 09:26:27.532052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.468 [2024-07-15 09:26:27.541284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.468 [2024-07-15 09:26:27.541300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.468 [2024-07-15 09:26:27.550370] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.468 [2024-07-15 09:26:27.550386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.468 [2024-07-15 09:26:27.559180] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.468 [2024-07-15 09:26:27.559197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.468 [2024-07-15 09:26:27.568388] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.468 [2024-07-15 09:26:27.568404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.468 [2024-07-15 09:26:27.577047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.468 [2024-07-15 09:26:27.577063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.468 [2024-07-15 09:26:27.585716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.469 [2024-07-15 09:26:27.585732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.469 [2024-07-15 09:26:27.594430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.469 [2024-07-15 09:26:27.594446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.469 [2024-07-15 09:26:27.603284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.469 [2024-07-15 09:26:27.603299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.469 [2024-07-15 09:26:27.612363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.469 [2024-07-15 09:26:27.612379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.469 [2024-07-15 09:26:27.620982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.469 [2024-07-15 09:26:27.620998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.469 [2024-07-15 09:26:27.630224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.469 [2024-07-15 09:26:27.630239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.469 [2024-07-15 09:26:27.638982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.469 [2024-07-15 09:26:27.638997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.469 [2024-07-15 09:26:27.648181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.469 [2024-07-15 09:26:27.648200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.469 [2024-07-15 09:26:27.657476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.469 [2024-07-15 09:26:27.657492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.469 [2024-07-15 09:26:27.666552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.469 [2024-07-15 09:26:27.666569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.675760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.675777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.684845] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.684862] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.693648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.693664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.702767] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.702787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.711522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.711539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.720744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.720765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.729376] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.729392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.738099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.738116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.747327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.747344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.755928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.755944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.765116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.765132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.774350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.774367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.783808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.783824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.793121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.793137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.802432] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.802449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.811821] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.811837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.820813] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.820830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.829712] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.829729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.838714] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.838730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.848062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.848080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.857061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.857078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.865776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.865793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.874899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.874919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.883608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.883623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.891793] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.891811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.901040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.901056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.909569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.909584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.918249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.918266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.729 [2024-07-15 09:26:27.927487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.729 [2024-07-15 09:26:27.927504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.989 [2024-07-15 09:26:27.935871] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.989 [2024-07-15 09:26:27.935887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.989 [2024-07-15 09:26:27.944368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.989 [2024-07-15 09:26:27.944383] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.989 [2024-07-15 09:26:27.953425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.989 [2024-07-15 09:26:27.953441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.989 [2024-07-15 09:26:27.962221] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.989 [2024-07-15 09:26:27.962237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.989 [2024-07-15 09:26:27.970710] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.989 [2024-07-15 09:26:27.970726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.989 [2024-07-15 09:26:27.979883] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.989 [2024-07-15 09:26:27.979900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.989 [2024-07-15 09:26:27.988470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.989 [2024-07-15 09:26:27.988487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.989 [2024-07-15 09:26:27.997072] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.989 [2024-07-15 09:26:27.997088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.989 [2024-07-15 09:26:28.006246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.989 [2024-07-15 09:26:28.006261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.989 [2024-07-15 09:26:28.015555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.989 [2024-07-15 09:26:28.015571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.989 [2024-07-15 09:26:28.024418] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.989 [2024-07-15 09:26:28.024433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.989 [2024-07-15 09:26:28.033535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.989 [2024-07-15 09:26:28.033551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.989 [2024-07-15 09:26:28.042341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.989 [2024-07-15 09:26:28.042360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.989 [2024-07-15 09:26:28.051453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.989 [2024-07-15 09:26:28.051469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.989 [2024-07-15 09:26:28.060668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.989 [2024-07-15 09:26:28.060684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.989 [2024-07-15 09:26:28.069585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.989 [2024-07-15 09:26:28.069602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.989 [2024-07-15 09:26:28.078813] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.990 [2024-07-15 09:26:28.078829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.990 [2024-07-15 09:26:28.088087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.990 [2024-07-15 09:26:28.088104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.990 [2024-07-15 09:26:28.096750] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.990 [2024-07-15 09:26:28.096770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.990 [2024-07-15 09:26:28.105400] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.990 [2024-07-15 09:26:28.105417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.990 [2024-07-15 09:26:28.114192] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.990 [2024-07-15 09:26:28.114208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.990 [2024-07-15 09:26:28.123197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.990 [2024-07-15 09:26:28.123212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.990 [2024-07-15 09:26:28.132541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.990 [2024-07-15 09:26:28.132557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.990 [2024-07-15 09:26:28.141921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.990 [2024-07-15 09:26:28.141937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.990 [2024-07-15 09:26:28.150809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.990 [2024-07-15 09:26:28.150826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.990 [2024-07-15 09:26:28.159633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.990 [2024-07-15 09:26:28.159649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.990 [2024-07-15 09:26:28.168467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.990 [2024-07-15 09:26:28.168483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.990 [2024-07-15 09:26:28.177871] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.990 [2024-07-15 09:26:28.177886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.990 [2024-07-15 09:26:28.186653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.990 [2024-07-15 09:26:28.186669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.195896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.195913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.204633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.204648] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.213807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.213824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.222389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.222405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.231120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.231136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.240812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.240828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.249022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.249038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.258075] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.258091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.267285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.267302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.276309] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.276325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.285508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.285524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.294277] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.294293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.303408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.303424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.312200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.312216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.321349] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.321365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.330682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.330698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.339848] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.339864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.349115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.349130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.357961] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.357977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.367154] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.367170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.376249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.376264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.385248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.385264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.394253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.394268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.403556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.403573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.412207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.412224] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.421255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.421271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.429776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.429792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.438439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.438455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.250 [2024-07-15 09:26:28.447592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.250 [2024-07-15 09:26:28.447608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.511 [2024-07-15 09:26:28.456913] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.511 [2024-07-15 09:26:28.456929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.511 [2024-07-15 09:26:28.466223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.511 [2024-07-15 09:26:28.466238] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.511 [2024-07-15 09:26:28.474659] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.511 [2024-07-15 09:26:28.474675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.511 [2024-07-15 09:26:28.484028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.511 [2024-07-15 09:26:28.484044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.511 [2024-07-15 09:26:28.492634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.511 [2024-07-15 09:26:28.492650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.511 [2024-07-15 09:26:28.501655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.511 [2024-07-15 09:26:28.501671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.511 [2024-07-15 09:26:28.510382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.511 [2024-07-15 09:26:28.510399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.511 [2024-07-15 09:26:28.519620] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.511 [2024-07-15 09:26:28.519636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.511 [2024-07-15 09:26:28.528506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.511 [2024-07-15 09:26:28.528522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.511 [2024-07-15 09:26:28.537311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.511 [2024-07-15 09:26:28.537327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.511 [2024-07-15 09:26:28.546494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.511 [2024-07-15 09:26:28.546510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.511 [2024-07-15 09:26:28.555831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.511 [2024-07-15 09:26:28.555848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.511 [2024-07-15 09:26:28.564505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.511 [2024-07-15 09:26:28.564522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.511 [2024-07-15 09:26:28.573315] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.511 [2024-07-15 09:26:28.573331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.511 [2024-07-15 09:26:28.582397] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.511 [2024-07-15 09:26:28.582413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.511 [2024-07-15 09:26:28.591296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.511 [2024-07-15 09:26:28.591312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.512 [2024-07-15 09:26:28.600862] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.512 [2024-07-15 09:26:28.600878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.512 [2024-07-15 09:26:28.609191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.512 [2024-07-15 09:26:28.609207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.512 [2024-07-15 09:26:28.618313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.512 [2024-07-15 09:26:28.618330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.512 [2024-07-15 09:26:28.627566] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.512 [2024-07-15 09:26:28.627583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.512 [2024-07-15 09:26:28.636364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.512 [2024-07-15 09:26:28.636381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.512 [2024-07-15 09:26:28.644999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.512 [2024-07-15 09:26:28.645015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.512 [2024-07-15 09:26:28.654206] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.512 [2024-07-15 09:26:28.654222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.512 [2024-07-15 09:26:28.663059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.512 [2024-07-15 09:26:28.663075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.512 [2024-07-15 09:26:28.671603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.512 [2024-07-15 09:26:28.671620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.512 [2024-07-15 09:26:28.680661] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.512 [2024-07-15 09:26:28.680677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.512 [2024-07-15 09:26:28.689916] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.512 [2024-07-15 09:26:28.689932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.512 [2024-07-15 09:26:28.698599] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.512 [2024-07-15 09:26:28.698614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.512 [2024-07-15 09:26:28.707590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.512 [2024-07-15 09:26:28.707605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.716716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.716733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.725535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.725551] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.734259] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.734274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.742912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.742927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.751419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.751434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.760214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.760229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.769051] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.769067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.777432] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.777448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.786349] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.786365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.795132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.795148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.803985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.804001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.813127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.813144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.821919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.821937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.830518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.830535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.840367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.840383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.848982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.848997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.858211] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.858227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.867053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.867069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.876275] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.876292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.885635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.885655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.895020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.895036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.903187] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.903203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.912504] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.912522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.921285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.921302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.930421] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.930438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.939135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.939152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.772 [2024-07-15 09:26:28.947866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.772 [2024-07-15 09:26:28.947883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.773 [2024-07-15 09:26:28.956511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.773 [2024-07-15 09:26:28.956526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.773 [2024-07-15 09:26:28.965413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.773 [2024-07-15 09:26:28.965430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:28.978985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.033 [2024-07-15 09:26:28.979003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:28.987153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.033 [2024-07-15 09:26:28.987169] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:28.995985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.033 [2024-07-15 09:26:28.996001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:29.005238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.033 [2024-07-15 09:26:29.005254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:29.014029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.033 [2024-07-15 09:26:29.014045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:29.022992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.033 [2024-07-15 09:26:29.023007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:29.031978] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.033 [2024-07-15 09:26:29.031995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:29.040081] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.033 [2024-07-15 09:26:29.040096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:29.049110] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.033 [2024-07-15 09:26:29.049126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:29.057786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.033 [2024-07-15 09:26:29.057807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:29.067045] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.033 [2024-07-15 09:26:29.067061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:29.075619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.033 [2024-07-15 09:26:29.075636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:29.084871] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.033 [2024-07-15 09:26:29.084888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:29.094040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.033 [2024-07-15 09:26:29.094056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:29.102647] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.033 [2024-07-15 09:26:29.102663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:29.111897] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.033 [2024-07-15 09:26:29.111913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:29.120610] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.033 [2024-07-15 09:26:29.120626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:29.129919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.033 [2024-07-15 09:26:29.129935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:29.139172] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.033 [2024-07-15 09:26:29.139189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:29.148114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.033 [2024-07-15 09:26:29.148130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:29.157163] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.033 [2024-07-15 09:26:29.157179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:29.166332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.033 [2024-07-15 09:26:29.166348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:29.175448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.033 [2024-07-15 09:26:29.175464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:29.184764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.033 [2024-07-15 09:26:29.184780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:29.193081] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.033 [2024-07-15 09:26:29.193097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.033 [2024-07-15 09:26:29.202227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.034 [2024-07-15 09:26:29.202244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.034 [2024-07-15 09:26:29.211063] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.034 [2024-07-15 09:26:29.211080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.034 [2024-07-15 09:26:29.220130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.034 [2024-07-15 09:26:29.220146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.294 [2024-07-15 09:26:29.233962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.294 [2024-07-15 09:26:29.233982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.294 [2024-07-15 09:26:29.242313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.294 [2024-07-15 09:26:29.242330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.294 [2024-07-15 09:26:29.251148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.294 [2024-07-15 09:26:29.251164] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.294 [2024-07-15 09:26:29.259883] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.294 [2024-07-15 09:26:29.259899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.294 [2024-07-15 09:26:29.268445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.294 [2024-07-15 09:26:29.268462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.294 [2024-07-15 09:26:29.277516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.294 [2024-07-15 09:26:29.277533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.294 [2024-07-15 09:26:29.286011] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.294 [2024-07-15 09:26:29.286028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.294 [2024-07-15 09:26:29.294942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.294 [2024-07-15 09:26:29.294958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.294 [2024-07-15 09:26:29.303538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.294 [2024-07-15 09:26:29.303554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.294 [2024-07-15 09:26:29.312096] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.294 [2024-07-15 09:26:29.312112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.294 [2024-07-15 09:26:29.320720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.294 [2024-07-15 09:26:29.320736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.294 [2024-07-15 09:26:29.329655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.294 [2024-07-15 09:26:29.329672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.294 [2024-07-15 09:26:29.338870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.294 [2024-07-15 09:26:29.338886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.294 [2024-07-15 09:26:29.347576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.294 [2024-07-15 09:26:29.347593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.294 [2024-07-15 09:26:29.356322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.294 [2024-07-15 09:26:29.356339] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.294 [2024-07-15 09:26:29.364326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.294 [2024-07-15 09:26:29.364342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.294 [2024-07-15 09:26:29.373382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.294 [2024-07-15 09:26:29.373399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.294 [2024-07-15 09:26:29.381522] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.294 [2024-07-15 09:26:29.381538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.294 [2024-07-15 09:26:29.390597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.294 [2024-07-15 09:26:29.390612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.295 [2024-07-15 09:26:29.398829] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.295 [2024-07-15 09:26:29.398853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.295 [2024-07-15 09:26:29.407496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.295 [2024-07-15 09:26:29.407512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.295 [2024-07-15 09:26:29.416578] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.295 [2024-07-15 09:26:29.416595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.295 [2024-07-15 09:26:29.425407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.295 [2024-07-15 09:26:29.425424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.295 [2024-07-15 09:26:29.434392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.295 [2024-07-15 09:26:29.434409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.295 [2024-07-15 09:26:29.443593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.295 [2024-07-15 09:26:29.443609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.295 [2024-07-15 09:26:29.452799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.295 [2024-07-15 09:26:29.452816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.295 [2024-07-15 09:26:29.461594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.295 [2024-07-15 09:26:29.461610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.295 [2024-07-15 09:26:29.470670] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.295 [2024-07-15 09:26:29.470687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.295 [2024-07-15 09:26:29.479493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.295 [2024-07-15 09:26:29.479510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.295 [2024-07-15 09:26:29.488345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.295 [2024-07-15 09:26:29.488362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.555 [2024-07-15 09:26:29.497261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.555 [2024-07-15 09:26:29.497277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.555 [2024-07-15 09:26:29.506664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.555 [2024-07-15 09:26:29.506680] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.555 [2024-07-15 09:26:29.515771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.555 [2024-07-15 09:26:29.515788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.555 [2024-07-15 09:26:29.524573] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.555 [2024-07-15 09:26:29.524590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.555 [2024-07-15 09:26:29.533285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.555 [2024-07-15 09:26:29.533302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.555 [2024-07-15 09:26:29.542232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.555 [2024-07-15 09:26:29.542249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.555 [2024-07-15 09:26:29.550944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.555 [2024-07-15 09:26:29.550960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.555 [2024-07-15 09:26:29.560114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.555 [2024-07-15 09:26:29.560130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.555 [2024-07-15 09:26:29.569366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.555 [2024-07-15 09:26:29.569385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.555 [2024-07-15 09:26:29.577959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.555 [2024-07-15 09:26:29.577975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.555 [2024-07-15 09:26:29.587212] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.555 [2024-07-15 09:26:29.587229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.555 [2024-07-15 09:26:29.595294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.555 [2024-07-15 09:26:29.595311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.555 [2024-07-15 09:26:29.603927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.555 [2024-07-15 09:26:29.603944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.555 [2024-07-15 09:26:29.612696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.555 [2024-07-15 09:26:29.612713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.555 [2024-07-15 09:26:29.621909] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.555 [2024-07-15 09:26:29.621924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.555 [2024-07-15 09:26:29.631085] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.555 [2024-07-15 09:26:29.631101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.555 [2024-07-15 09:26:29.639729] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.555 [2024-07-15 09:26:29.639745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.555 [2024-07-15 09:26:29.649022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.555 [2024-07-15 09:26:29.649038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.555 [2024-07-15 09:26:29.657912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.556 [2024-07-15 09:26:29.657928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.556 [2024-07-15 09:26:29.666734] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.556 [2024-07-15 09:26:29.666750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.556 [2024-07-15 09:26:29.675308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.556 [2024-07-15 09:26:29.675323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.556 [2024-07-15 09:26:29.683888] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.556 [2024-07-15 09:26:29.683904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.556 [2024-07-15 09:26:29.693123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.556 [2024-07-15 09:26:29.693139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.556 [2024-07-15 09:26:29.701874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.556 [2024-07-15 09:26:29.701893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.556 [2024-07-15 09:26:29.709955] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.556 [2024-07-15 09:26:29.709971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.556 [2024-07-15 09:26:29.718674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.556 [2024-07-15 09:26:29.718690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.556 [2024-07-15 09:26:29.727639] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.556 [2024-07-15 09:26:29.727655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.556 [2024-07-15 09:26:29.736914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.556 [2024-07-15 09:26:29.736930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.556 [2024-07-15 09:26:29.745109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.556 [2024-07-15 09:26:29.745125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.556 [2024-07-15 09:26:29.753727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.556 [2024-07-15 09:26:29.753743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.816 [2024-07-15 09:26:29.761912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.816 [2024-07-15 09:26:29.761931] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.816 [2024-07-15 09:26:29.770839] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.816 [2024-07-15 09:26:29.770855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:29.780132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:29.780148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:29.789027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:29.789043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:29.797691] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:29.797707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:29.806786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:29.806801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:29.815498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:29.815514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:29.825059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:29.825075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:29.834193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:29.834208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:29.842813] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:29.842829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:29.852118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:29.852135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:29.860817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:29.860832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:29.869458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:29.869473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:29.878068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:29.878083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:29.887212] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:29.887228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:29.896485] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:29.896501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:29.905339] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:29.905355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:29.914509] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:29.914526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:29.923244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:29.923260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:29.932545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:29.932561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:29.941703] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:29.941719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:29.950891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:29.950907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:29.960073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:29.960090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:29.969271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:29.969287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:29.978106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:29.978122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:29.986932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:29.986948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:29.996245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:29.996261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:30.006067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:30.006083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.817 [2024-07-15 09:26:30.015316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.817 [2024-07-15 09:26:30.015333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.023848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.023864] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.033069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.033084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.042365] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.042381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.050500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.050516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.059376] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.059392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.068380] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.068397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.077680] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.077696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.086628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.086644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.095366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.095381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.104369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.104386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.113083] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.113099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.122170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.122186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.131094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.131110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.140222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.140238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.149542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.149558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.158970] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.158985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.167790] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.167805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.176902] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.176918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.185789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.185804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.194402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.194418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.203147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.203163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.212360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.212375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.220723] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.220739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.229385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.229401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.243348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.243368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.251786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.251802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.260321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.260337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.077 [2024-07-15 09:26:30.269492] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.077 [2024-07-15 09:26:30.269507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.277719] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.277734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.286747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.286767] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.294969] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.294984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.303859] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.303875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.312708] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.312724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.321316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.321332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.329447] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.329463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.338133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.338148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.347587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.347603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.355710] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.355726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.364790] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.364806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.373428] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.373444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.382923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.382939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.392228] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.392243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.401009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.401024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.410127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.410145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.419523] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.419540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.428264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.428280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.437656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.437672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.445815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.445831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.454657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.454672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.463930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.463945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.472672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.472688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.481358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.481374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.490565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.490581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.499375] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.499391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.508664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.508680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.517368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.517384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.526466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.526482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.337 [2024-07-15 09:26:30.535132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.337 [2024-07-15 09:26:30.535148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.544214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.544230] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.552464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.552479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.561653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.561669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.571114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.571130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.580691] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.580710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.589101] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.589117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.597762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.597778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.606602] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.606617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.615662] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.615678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.624869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.624885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.633476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.633491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.642559] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.642576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.651814] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.651830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.660771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.660788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.669500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.669516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.678624] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.678641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.687506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.687522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.696168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.696184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.705441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.705457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.714180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.714195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.723485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.723501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.732732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.732748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.741956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.741972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.751059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.751078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.759818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.759834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.769005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.769022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.777737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.777760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.786949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.786966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.597 [2024-07-15 09:26:30.796105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.597 [2024-07-15 09:26:30.796120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.856 [2024-07-15 09:26:30.804895] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.856 [2024-07-15 09:26:30.804911] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.856 [2024-07-15 09:26:30.814686] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.856 [2024-07-15 09:26:30.814702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.856 [2024-07-15 09:26:30.822719] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.856 [2024-07-15 09:26:30.822735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.856 [2024-07-15 09:26:30.831921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.856 [2024-07-15 09:26:30.831937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.856 [2024-07-15 09:26:30.840534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.856 [2024-07-15 09:26:30.840550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.856 [2024-07-15 09:26:30.849758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.856 [2024-07-15 09:26:30.849775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.856 [2024-07-15 09:26:30.858520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.856 [2024-07-15 09:26:30.858536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.856 [2024-07-15 09:26:30.867655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.856 [2024-07-15 09:26:30.867671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.856 [2024-07-15 09:26:30.876393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.856 [2024-07-15 09:26:30.876408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.856 [2024-07-15 09:26:30.885554] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.856 [2024-07-15 09:26:30.885570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.856 [2024-07-15 09:26:30.893624] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.856 [2024-07-15 09:26:30.893640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.856 [2024-07-15 09:26:30.902794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.856 [2024-07-15 09:26:30.902811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.856 [2024-07-15 09:26:30.912209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.856 [2024-07-15 09:26:30.912226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.856 [2024-07-15 09:26:30.921165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.857 [2024-07-15 09:26:30.921185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.857 [2024-07-15 09:26:30.930416] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.857 [2024-07-15 09:26:30.930433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.857 [2024-07-15 09:26:30.943792] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.857 [2024-07-15 09:26:30.943809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.857 [2024-07-15 09:26:30.952103] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.857 [2024-07-15 09:26:30.952119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.857 [2024-07-15 09:26:30.961377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.857 [2024-07-15 09:26:30.961393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.857 [2024-07-15 09:26:30.970094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.857 [2024-07-15 09:26:30.970110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.857 [2024-07-15 09:26:30.979130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.857 [2024-07-15 09:26:30.979147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.857 [2024-07-15 09:26:30.987873] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.857 [2024-07-15 09:26:30.987888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.857 [2024-07-15 09:26:30.996689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.857 [2024-07-15 09:26:30.996705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.857 [2024-07-15 09:26:31.005668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.857 [2024-07-15 09:26:31.005685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.857 [2024-07-15 09:26:31.014993] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.857 [2024-07-15 09:26:31.015008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.857 [2024-07-15 09:26:31.023979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.857 [2024-07-15 09:26:31.023994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.857 [2024-07-15 09:26:31.032480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.857 [2024-07-15 09:26:31.032496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.857 [2024-07-15 09:26:31.042105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.857 [2024-07-15 09:26:31.042122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.857 [2024-07-15 09:26:31.051461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.857 [2024-07-15 09:26:31.051477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.060320] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.060336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.069732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.069748] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.078803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.078819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.087621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.087637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.096661] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.096678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.105717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.105733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.115153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.115169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.123911] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.123927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.132626] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.132642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.141677] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.141692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.151160] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.151176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.159953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.159969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.168939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.168955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.177985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.178001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.186153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.186169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.194743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.194765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.203529] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.203545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.212326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.212342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.221451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.221467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.229610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.229625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.238127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.238144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.252004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.252021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.260290] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.260306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.268883] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.268899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.277905] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.277921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.287366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.287382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.296736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.296757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.304794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.304810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.117 [2024-07-15 09:26:31.313564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.117 [2024-07-15 09:26:31.313580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.377 [2024-07-15 09:26:31.322673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.322689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.331563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.331579] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.340788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.340803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.349644] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.349659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.358176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.358191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.366637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.366652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.375814] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.375830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.384603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.384620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.393566] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.393581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.402406] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.402422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.411609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.411625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.420208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.420224] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.429243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.429259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.438391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.438408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.447071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.447087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.455710] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.455725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.464929] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.464945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.474225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.474240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.482983] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.482998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.492331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.492347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.501142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.501158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.510218] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.510233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.518326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.518341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.527503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.527519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.536165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.536180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.544831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.544846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.553909] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.553924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.563135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.563150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.378 [2024-07-15 09:26:31.571383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.378 [2024-07-15 09:26:31.571398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.638 [2024-07-15 09:26:31.580748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.638 [2024-07-15 09:26:31.580770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.589954] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.589969] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.599046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.599065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.608522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.608537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.616642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.616657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.625790] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.625806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.634701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.634717] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.643686] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.643703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.651834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.651850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.660504] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.660519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.668926] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.668942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.677617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.677632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.686221] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.686237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.695408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.695424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.703610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.703625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.713186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.713201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.722242] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.722258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.731128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.731142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.739896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.739912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.749030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.749046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.758315] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.758331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.767940] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.767959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.776981] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.776997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.785783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.785798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.794345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.794360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.803062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.803078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.811945] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.811961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.820821] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.820836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.639 [2024-07-15 09:26:31.829837] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.639 [2024-07-15 09:26:31.829853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.898 [2024-07-15 09:26:31.839193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.898 [2024-07-15 09:26:31.839209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.898 [2024-07-15 09:26:31.847830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.898 [2024-07-15 09:26:31.847845] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.898 [2024-07-15 09:26:31.856884] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.898 [2024-07-15 09:26:31.856899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.898 [2024-07-15 09:26:31.865319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.898 [2024-07-15 09:26:31.865335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.898 [2024-07-15 09:26:31.874186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.898 [2024-07-15 09:26:31.874201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.899 [2024-07-15 09:26:31.883204] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.899 [2024-07-15 09:26:31.883219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.899 [2024-07-15 09:26:31.891922] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.899 [2024-07-15 09:26:31.891938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.899 [2024-07-15 09:26:31.901003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.899 [2024-07-15 09:26:31.901019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.899 [2024-07-15 09:26:31.910125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.899 [2024-07-15 09:26:31.910141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.899 [2024-07-15 09:26:31.919384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.899 [2024-07-15 09:26:31.919401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.899 [2024-07-15 09:26:31.928462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.899 [2024-07-15 09:26:31.928477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.899 [2024-07-15 09:26:31.937706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.899 [2024-07-15 09:26:31.937725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.899 [2024-07-15 09:26:31.946987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.899 [2024-07-15 09:26:31.947003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.899 [2024-07-15 09:26:31.956166] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.899 [2024-07-15 09:26:31.956181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.899 [2024-07-15 09:26:31.965179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.899 [2024-07-15 09:26:31.965195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.899 [2024-07-15 09:26:31.974509] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.899 [2024-07-15 09:26:31.974524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.899 [2024-07-15 09:26:31.983762] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.899 [2024-07-15 09:26:31.983778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.899 [2024-07-15 09:26:31.992470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.899 [2024-07-15 09:26:31.992485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.899 [2024-07-15 09:26:32.001086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.899 [2024-07-15 09:26:32.001102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.899 [2024-07-15 09:26:32.010331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.899 [2024-07-15 09:26:32.010347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.899 [2024-07-15 09:26:32.019216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.899 [2024-07-15 09:26:32.019231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.899 [2024-07-15 09:26:32.028100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.899 [2024-07-15 09:26:32.028116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.899 [2024-07-15 09:26:32.037258] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.899 [2024-07-15 09:26:32.037274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.899 [2024-07-15 09:26:32.046170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.899 [2024-07-15 09:26:32.046186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.899 [2024-07-15 09:26:32.054840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.899 [2024-07-15 09:26:32.054855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.899 [2024-07-15 09:26:32.063969] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.899 [2024-07-15 09:26:32.063985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.899 [2024-07-15 09:26:32.072651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.899 [2024-07-15 09:26:32.072666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.899 [2024-07-15 09:26:32.081421] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.899 [2024-07-15 09:26:32.081436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.899 [2024-07-15 09:26:32.090519] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.899 [2024-07-15 09:26:32.090535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.099419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.099435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.108450] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.108469] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.117610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.117625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.126259] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.126274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.135319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.135334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.143939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.143955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.153279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.153296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.162557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.162573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.170567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.170584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.179676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.179692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.189396] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.189411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.197493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.197509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.206587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.206602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.215293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.215311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.223912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.223926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.232990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.233005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.242596] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.242612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.252051] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.252067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.260834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.260849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.269486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.269501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.279194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.279215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.287510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.287525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.296253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.296268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.304761] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.304777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.310996] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.311009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 00:17:45.160 Latency(us) 00:17:45.160 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.160 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:17:45.160 Nvme1n1 : 5.01 18304.88 143.01 0.00 0.00 6985.82 2990.08 15510.19 00:17:45.160 =================================================================================================================== 00:17:45.160 Total : 18304.88 143.01 0.00 0.00 6985.82 2990.08 15510.19 00:17:45.160 [2024-07-15 09:26:32.319011] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.319023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.327030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.327041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.160 [2024-07-15 09:26:32.335057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.160 [2024-07-15 09:26:32.335065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.161 [2024-07-15 09:26:32.343075] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.161 [2024-07-15 09:26:32.343085] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.161 [2024-07-15 09:26:32.351094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.161 [2024-07-15 09:26:32.351102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.161 [2024-07-15 09:26:32.359114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.161 [2024-07-15 09:26:32.359123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.420 [2024-07-15 09:26:32.367133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.420 [2024-07-15 09:26:32.367142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.420 [2024-07-15 09:26:32.375152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.420 [2024-07-15 09:26:32.375160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.420 [2024-07-15 09:26:32.383173] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.420 [2024-07-15 09:26:32.383181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.420 [2024-07-15 09:26:32.391194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.420 [2024-07-15 09:26:32.391202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.420 [2024-07-15 09:26:32.399217] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.420 [2024-07-15 09:26:32.399224] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.420 [2024-07-15 09:26:32.407238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.420 [2024-07-15 09:26:32.407248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.420 [2024-07-15 09:26:32.415256] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.420 [2024-07-15 09:26:32.415264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.420 [2024-07-15 09:26:32.423276] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.420 [2024-07-15 09:26:32.423284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.420 [2024-07-15 09:26:32.431295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.420 [2024-07-15 09:26:32.431303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.420 [2024-07-15 09:26:32.439316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.420 [2024-07-15 09:26:32.439323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (667888) - No such process 00:17:45.420 09:26:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 667888 00:17:45.420 09:26:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:45.420 09:26:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.420 09:26:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:45.420 09:26:32 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.420 09:26:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:45.420 09:26:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.420 09:26:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:45.420 delay0 00:17:45.420 09:26:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.420 09:26:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:45.420 09:26:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.420 09:26:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:45.420 09:26:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.420 09:26:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:45.420 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.420 [2024-07-15 09:26:32.568127] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:17:53.546 [2024-07-15 09:26:39.774155] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1d50 is same with the state(5) to be set 00:17:53.546 [2024-07-15 09:26:39.774190] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1d50 is same with the state(5) to be set 00:17:53.546 [2024-07-15 09:26:39.774195] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1d50 is same with the state(5) to be set 00:17:53.546 [2024-07-15 09:26:39.774200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1d50 is same with the state(5) to be set 00:17:53.546 Initializing NVMe Controllers 00:17:53.546 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:53.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:53.546 Initialization complete. Launching workers. 
00:17:53.546 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 238, failed: 30150 00:17:53.546 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 30257, failed to submit 131 00:17:53.546 success 30174, unsuccess 83, failed 0 00:17:53.546 09:26:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:17:53.546 09:26:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:17:53.546 09:26:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:53.546 09:26:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:17:53.546 09:26:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:53.546 09:26:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:17:53.546 09:26:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:53.546 09:26:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:53.546 rmmod nvme_tcp 00:17:53.546 rmmod nvme_fabrics 00:17:53.546 rmmod nvme_keyring 00:17:53.546 09:26:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:53.546 09:26:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:17:53.546 09:26:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:17:53.546 09:26:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 665701 ']' 00:17:53.546 09:26:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 665701 00:17:53.546 09:26:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 665701 ']' 00:17:53.546 09:26:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 665701 00:17:53.546 09:26:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:17:53.546 09:26:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:53.546 09:26:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 665701 00:17:53.546 09:26:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:53.546 09:26:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:53.546 09:26:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 665701' 00:17:53.546 killing process with pid 665701 00:17:53.546 09:26:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 665701 00:17:53.546 09:26:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 665701 00:17:53.546 09:26:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:53.546 09:26:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:53.546 09:26:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:53.546 09:26:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:53.546 09:26:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:53.546 09:26:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.546 09:26:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.546 09:26:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.927 09:26:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:54.927 00:17:54.927 real 0m34.942s 00:17:54.927 user 0m45.896s 00:17:54.927 sys 0m12.082s 00:17:54.927 09:26:42 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:17:54.927 09:26:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:54.927 ************************************ 00:17:54.927 END TEST nvmf_zcopy 00:17:54.927 ************************************ 00:17:55.189 09:26:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:55.189 09:26:42 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:55.189 09:26:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:55.189 09:26:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:55.189 09:26:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:55.189 ************************************ 00:17:55.189 START TEST nvmf_nmic 00:17:55.189 ************************************ 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:55.189 * Looking for test storage... 00:17:55.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:17:55.189 09:26:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:03.381 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:03.381 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:03.381 Found net devices under 0000:31:00.0: cvl_0_0 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:03.381 Found net devices under 0000:31:00.1: cvl_0_1 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:03.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:03.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:18:03.381 00:18:03.381 --- 10.0.0.2 ping statistics --- 00:18:03.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.381 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:03.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:03.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:18:03.381 00:18:03.381 --- 10.0.0.1 ping statistics --- 00:18:03.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.381 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:03.381 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:03.642 09:26:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:03.642 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:03.642 09:26:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:03.642 09:26:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:03.642 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=675234 00:18:03.642 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 675234 00:18:03.642 09:26:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:03.642 09:26:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 675234 ']' 00:18:03.642 09:26:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.642 09:26:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.642 09:26:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.642 09:26:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.642 09:26:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:03.642 [2024-07-15 09:26:50.666920] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:18:03.642 [2024-07-15 09:26:50.666974] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.642 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.642 [2024-07-15 09:26:50.743251] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:03.642 [2024-07-15 09:26:50.813238] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.642 [2024-07-15 09:26:50.813275] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:03.642 [2024-07-15 09:26:50.813283] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.642 [2024-07-15 09:26:50.813289] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.642 [2024-07-15 09:26:50.813295] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:03.642 [2024-07-15 09:26:50.813435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.642 [2024-07-15 09:26:50.813552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.642 [2024-07-15 09:26:50.813715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.642 [2024-07-15 09:26:50.813715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:04.579 [2024-07-15 09:26:51.490328] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:04.579 Malloc0 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:04.579 [2024-07-15 09:26:51.549632] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:04.579 test case1: single bdev can't be used in multiple subsystems 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:04.579 [2024-07-15 09:26:51.585590] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:04.579 [2024-07-15 09:26:51.585608] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:04.579 [2024-07-15 09:26:51.585616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.579 request: 00:18:04.579 { 00:18:04.579 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:04.579 "namespace": { 00:18:04.579 "bdev_name": "Malloc0", 00:18:04.579 "no_auto_visible": false 00:18:04.579 }, 00:18:04.579 "method": "nvmf_subsystem_add_ns", 00:18:04.579 "req_id": 1 00:18:04.579 } 00:18:04.579 Got JSON-RPC error response 00:18:04.579 response: 00:18:04.579 { 00:18:04.579 "code": -32602, 00:18:04.579 "message": "Invalid parameters" 00:18:04.579 } 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:04.579 Adding namespace failed - expected result. 
00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:04.579 test case2: host connect to nvmf target in multiple paths 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:04.579 [2024-07-15 09:26:51.597714] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.579 09:26:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:05.956 09:26:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:07.860 09:26:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:07.860 09:26:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:18:07.860 09:26:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:07.860 09:26:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:07.860 09:26:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:18:09.761 09:26:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:09.761 09:26:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:09.761 09:26:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:09.761 09:26:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:09.761 09:26:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:09.761 09:26:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:18:09.761 09:26:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:09.761 [global] 00:18:09.761 thread=1 00:18:09.761 invalidate=1 00:18:09.761 rw=write 00:18:09.761 time_based=1 00:18:09.761 runtime=1 00:18:09.761 ioengine=libaio 00:18:09.761 direct=1 00:18:09.761 bs=4096 00:18:09.761 iodepth=1 00:18:09.761 norandommap=0 00:18:09.761 numjobs=1 00:18:09.761 00:18:09.761 verify_dump=1 00:18:09.761 verify_backlog=512 00:18:09.761 verify_state_save=0 00:18:09.761 do_verify=1 00:18:09.761 verify=crc32c-intel 00:18:09.761 [job0] 00:18:09.761 filename=/dev/nvme0n1 00:18:09.761 Could not set queue depth (nvme0n1) 00:18:10.020 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:10.020 fio-3.35 00:18:10.020 Starting 1 thread 00:18:10.959 00:18:10.959 job0: (groupid=0, jobs=1): err= 0: pid=676474: Mon Jul 15 09:26:58 2024 00:18:10.959 read: IOPS=19, BW=77.7KiB/s (79.6kB/s)(80.0KiB/1029msec) 00:18:10.959 slat (nsec): min=23846, max=29723, avg=24723.20, stdev=1227.21 
00:18:10.959 clat (usec): min=1216, max=42029, avg=39430.39, stdev=9007.58 00:18:10.959 lat (usec): min=1245, max=42053, avg=39455.11, stdev=9006.39 00:18:10.959 clat percentiles (usec): 00:18:10.959 | 1.00th=[ 1221], 5.00th=[ 1221], 10.00th=[41157], 20.00th=[41157], 00:18:10.959 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:18:10.959 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:10.959 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:10.959 | 99.99th=[42206] 00:18:10.959 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:18:10.959 slat (nsec): min=9301, max=62567, avg=24947.79, stdev=10046.04 00:18:10.959 clat (usec): min=251, max=618, avg=436.41, stdev=81.37 00:18:10.959 lat (usec): min=262, max=649, avg=461.36, stdev=87.39 00:18:10.959 clat percentiles (usec): 00:18:10.959 | 1.00th=[ 255], 5.00th=[ 265], 10.00th=[ 306], 20.00th=[ 367], 00:18:10.959 | 30.00th=[ 404], 40.00th=[ 441], 50.00th=[ 457], 60.00th=[ 478], 00:18:10.959 | 70.00th=[ 498], 80.00th=[ 502], 90.00th=[ 515], 95.00th=[ 537], 00:18:10.959 | 99.00th=[ 578], 99.50th=[ 603], 99.90th=[ 619], 99.95th=[ 619], 00:18:10.959 | 99.99th=[ 619] 00:18:10.959 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:10.959 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:10.959 lat (usec) : 500=71.99%, 750=24.25% 00:18:10.959 lat (msec) : 2=0.19%, 50=3.57% 00:18:10.959 cpu : usr=0.88%, sys=1.07%, ctx=532, majf=0, minf=1 00:18:10.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:10.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.959 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:10.959 00:18:10.959 Run status group 0 (all jobs): 00:18:10.959 READ: bw=77.7KiB/s (79.6kB/s), 77.7KiB/s-77.7KiB/s (79.6kB/s-79.6kB/s), io=80.0KiB (81.9kB), run=1029-1029msec 00:18:10.959 WRITE: bw=1990KiB/s (2038kB/s), 1990KiB/s-1990KiB/s (2038kB/s-2038kB/s), io=2048KiB (2097kB), run=1029-1029msec 00:18:10.959 00:18:10.959 Disk stats (read/write): 00:18:10.959 nvme0n1: ios=66/512, merge=0/0, ticks=668/218, in_queue=886, util=92.99% 00:18:10.959 09:26:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:11.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:11.220 09:26:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:11.220 09:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:18:11.220 09:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:11.221 rmmod nvme_tcp 00:18:11.221 rmmod nvme_fabrics 00:18:11.221 rmmod nvme_keyring 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 675234 ']' 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 675234 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 675234 ']' 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 675234 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 675234 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 675234' 00:18:11.221 killing process with pid 675234 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 675234 00:18:11.221 09:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 675234 00:18:11.482 09:26:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:11.482 09:26:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:11.482 09:26:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:11.482 09:26:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:11.482 09:26:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:11.482 09:26:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.482 09:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.482 09:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.024 09:27:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:14.024 00:18:14.024 real 0m18.424s 00:18:14.024 user 0m47.851s 00:18:14.024 sys 0m6.870s 00:18:14.024 09:27:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:14.024 09:27:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:14.024 ************************************ 00:18:14.024 END TEST nvmf_nmic 00:18:14.024 ************************************ 00:18:14.024 09:27:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:14.024 09:27:00 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:14.024 09:27:00 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:14.024 09:27:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:14.024 09:27:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:14.024 ************************************ 00:18:14.024 START TEST nvmf_fio_target 00:18:14.024 ************************************ 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:14.024 * Looking for test storage... 00:18:14.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:14.024 09:27:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:22.157 09:27:08 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:22.157 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:22.157 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:22.157 09:27:08 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:22.157 Found net devices under 0000:31:00.0: cvl_0_0 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:22.157 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:22.158 Found net devices under 0000:31:00.1: cvl_0_1 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:22.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:22.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.540 ms 00:18:22.158 00:18:22.158 --- 10.0.0.2 ping statistics --- 00:18:22.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.158 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:22.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:22.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:18:22.158 00:18:22.158 --- 10.0.0.1 ping statistics --- 00:18:22.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.158 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:22.158 09:27:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:22.158 09:27:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:22.158 09:27:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:22.158 09:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:22.158 09:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.158 09:27:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:22.158 09:27:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=681603 00:18:22.158 09:27:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 681603 00:18:22.158 09:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 681603 ']' 00:18:22.158 09:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.158 09:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:22.158 09:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
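The nvmf_tcp_init block traced above (nvmf/common.sh@229-268) pairs the two E810 ports by moving one of them into a private network namespace, so a single host can act as both NVMe/TCP target and initiator. A minimal sketch of that setup, assuming only the interface names (cvl_0_0, cvl_0_1) and 10.0.0.x addresses reported in this log:

ip -4 addr flush cvl_0_0                           # start from clean interfaces
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                       # namespace that will hold the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator interface stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic on the initiator port
ping -c 1 10.0.0.2                                 # target reachable from the root namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and the initiator reachable from the target namespace

nvmf_tgt is then started inside that namespace (the ip netns exec cvl_0_0_ns_spdk prefix visible in the nvmfappstart trace above), so the target listens on 10.0.0.2 while fio and nvme-cli run against it from the root namespace.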
00:18:22.158 09:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:22.158 09:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.158 [2024-07-15 09:27:09.067262] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:18:22.158 [2024-07-15 09:27:09.067314] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.158 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.158 [2024-07-15 09:27:09.141038] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:22.158 [2024-07-15 09:27:09.209635] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.158 [2024-07-15 09:27:09.209675] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:22.158 [2024-07-15 09:27:09.209682] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.158 [2024-07-15 09:27:09.209689] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:22.158 [2024-07-15 09:27:09.209694] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:22.158 [2024-07-15 09:27:09.209787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.158 [2024-07-15 09:27:09.210005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:22.158 [2024-07-15 09:27:09.210006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.158 [2024-07-15 09:27:09.209854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:22.727 09:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:22.727 09:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:18:22.727 09:27:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:22.727 09:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:22.727 09:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.727 09:27:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.727 09:27:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:22.987 [2024-07-15 09:27:10.022971] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.987 09:27:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:23.247 09:27:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:23.247 09:27:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:23.247 09:27:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:23.247 09:27:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:23.507 09:27:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
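The RPC calls traced from target/fio.sh@19 onward (and continued just below through fio.sh@44) build everything the fio jobs will exercise: a TCP transport, six malloc bdevs, a RAID0 and a concat volume, and one subsystem exposing four namespaces on 10.0.0.2:4420. A hedged sketch of that sequence, with names, sizes and flags taken from this log; $rpc abbreviates the full scripts/rpc.py path shown in the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                     # transport options exactly as logged at fio.sh@19
$rpc bdev_malloc_create 64 512                                   # Malloc0 (64 MiB, 512 B blocks), exported directly
$rpc bdev_malloc_create 64 512                                   # Malloc1, exported directly
$rpc bdev_malloc_create 64 512                                   # Malloc2, raid0 member
$rpc bdev_malloc_create 64 512                                   # Malloc3, raid0 member
$rpc bdev_raid_create -n raid0 -r 0 -z 64 -b 'Malloc2 Malloc3'   # RAID0, 64 KiB strip
$rpc bdev_malloc_create 64 512                                   # Malloc4..Malloc6, concat0 members
$rpc bdev_malloc_create 64 512
$rpc bdev_malloc_create 64 512
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0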
00:18:23.507 09:27:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:23.766 09:27:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:23.766 09:27:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:23.766 09:27:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:24.027 09:27:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:24.027 09:27:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:24.289 09:27:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:24.289 09:27:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:24.289 09:27:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:24.289 09:27:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:24.550 09:27:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:24.810 09:27:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:24.810 09:27:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:24.810 09:27:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:24.810 09:27:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:25.072 09:27:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:25.072 [2024-07-15 09:27:12.268104] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.332 09:27:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:25.332 09:27:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:25.594 09:27:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:26.979 09:27:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:26.979 09:27:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:18:26.979 09:27:14 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:26.979 09:27:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:18:26.979 09:27:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:18:26.979 09:27:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:18:29.523 09:27:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:29.523 09:27:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:29.523 09:27:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:29.523 09:27:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:18:29.523 09:27:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:29.523 09:27:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:18:29.523 09:27:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:29.523 [global] 00:18:29.523 thread=1 00:18:29.523 invalidate=1 00:18:29.523 rw=write 00:18:29.523 time_based=1 00:18:29.523 runtime=1 00:18:29.523 ioengine=libaio 00:18:29.523 direct=1 00:18:29.523 bs=4096 00:18:29.523 iodepth=1 00:18:29.523 norandommap=0 00:18:29.523 numjobs=1 00:18:29.523 00:18:29.523 verify_dump=1 00:18:29.523 verify_backlog=512 00:18:29.523 verify_state_save=0 00:18:29.523 do_verify=1 00:18:29.523 verify=crc32c-intel 00:18:29.523 [job0] 00:18:29.523 filename=/dev/nvme0n1 00:18:29.523 [job1] 00:18:29.524 filename=/dev/nvme0n2 00:18:29.524 [job2] 00:18:29.524 filename=/dev/nvme0n3 00:18:29.524 [job3] 00:18:29.524 filename=/dev/nvme0n4 00:18:29.524 Could not set queue depth (nvme0n1) 00:18:29.524 Could not set queue depth (nvme0n2) 00:18:29.524 Could not set queue depth (nvme0n3) 00:18:29.524 Could not set queue depth (nvme0n4) 00:18:29.524 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:29.524 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:29.524 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:29.524 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:29.524 fio-3.35 00:18:29.524 Starting 4 threads 00:18:30.907 00:18:30.907 job0: (groupid=0, jobs=1): err= 0: pid=683732: Mon Jul 15 09:27:17 2024 00:18:30.907 read: IOPS=18, BW=75.9KiB/s (77.7kB/s)(76.0KiB/1001msec) 00:18:30.907 slat (nsec): min=25039, max=25916, avg=25342.63, stdev=219.31 00:18:30.907 clat (usec): min=767, max=42013, avg=39362.33, stdev=9359.15 00:18:30.907 lat (usec): min=793, max=42039, avg=39387.68, stdev=9359.05 00:18:30.907 clat percentiles (usec): 00:18:30.907 | 1.00th=[ 766], 5.00th=[ 766], 10.00th=[41157], 20.00th=[41157], 00:18:30.907 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:18:30.907 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:30.907 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:30.907 | 99.99th=[42206] 00:18:30.907 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:18:30.907 slat (nsec): min=2965, max=70019, avg=28131.33, stdev=10304.48 
00:18:30.907 clat (usec): min=152, max=752, avg=459.32, stdev=122.76 00:18:30.907 lat (usec): min=161, max=786, avg=487.45, stdev=126.86 00:18:30.907 clat percentiles (usec): 00:18:30.907 | 1.00th=[ 225], 5.00th=[ 265], 10.00th=[ 293], 20.00th=[ 347], 00:18:30.907 | 30.00th=[ 375], 40.00th=[ 424], 50.00th=[ 465], 60.00th=[ 498], 00:18:30.907 | 70.00th=[ 529], 80.00th=[ 570], 90.00th=[ 627], 95.00th=[ 660], 00:18:30.907 | 99.00th=[ 709], 99.50th=[ 734], 99.90th=[ 750], 99.95th=[ 750], 00:18:30.907 | 99.99th=[ 750] 00:18:30.907 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:18:30.907 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:30.907 lat (usec) : 250=3.58%, 500=55.74%, 750=36.91%, 1000=0.38% 00:18:30.907 lat (msec) : 50=3.39% 00:18:30.908 cpu : usr=0.80%, sys=2.00%, ctx=531, majf=0, minf=1 00:18:30.908 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:30.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.908 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.908 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:30.908 job1: (groupid=0, jobs=1): err= 0: pid=683750: Mon Jul 15 09:27:17 2024 00:18:30.908 read: IOPS=15, BW=63.7KiB/s (65.3kB/s)(64.0KiB/1004msec) 00:18:30.908 slat (nsec): min=24053, max=24689, avg=24249.50, stdev=152.58 00:18:30.908 clat (usec): min=41910, max=42999, avg=42233.60, stdev=444.20 00:18:30.908 lat (usec): min=41934, max=43024, avg=42257.85, stdev=444.24 00:18:30.908 clat percentiles (usec): 00:18:30.908 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:18:30.908 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:18:30.908 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:18:30.908 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:18:30.908 | 99.99th=[43254] 00:18:30.908 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:18:30.908 slat (nsec): min=9037, max=65663, avg=26508.85, stdev=9662.11 00:18:30.908 clat (usec): min=126, max=1743, avg=606.49, stdev=159.14 00:18:30.908 lat (usec): min=135, max=1774, avg=633.00, stdev=163.28 00:18:30.908 clat percentiles (usec): 00:18:30.908 | 1.00th=[ 151], 5.00th=[ 338], 10.00th=[ 412], 20.00th=[ 490], 00:18:30.908 | 30.00th=[ 537], 40.00th=[ 586], 50.00th=[ 627], 60.00th=[ 660], 00:18:30.908 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 775], 95.00th=[ 816], 00:18:30.908 | 99.00th=[ 938], 99.50th=[ 971], 99.90th=[ 1745], 99.95th=[ 1745], 00:18:30.908 | 99.99th=[ 1745] 00:18:30.908 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:18:30.908 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:30.908 lat (usec) : 250=2.27%, 500=19.70%, 750=61.74%, 1000=12.88% 00:18:30.908 lat (msec) : 2=0.38%, 50=3.03% 00:18:30.908 cpu : usr=0.80%, sys=1.20%, ctx=528, majf=0, minf=1 00:18:30.908 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:30.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.908 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.908 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:30.908 job2: (groupid=0, jobs=1): err= 0: pid=683769: Mon 
Jul 15 09:27:17 2024 00:18:30.908 read: IOPS=16, BW=67.2KiB/s (68.8kB/s)(68.0KiB/1012msec) 00:18:30.908 slat (nsec): min=8733, max=26345, avg=22931.65, stdev=6745.47 00:18:30.908 clat (usec): min=41906, max=42988, avg=42136.78, stdev=396.84 00:18:30.908 lat (usec): min=41932, max=43015, avg=42159.71, stdev=395.72 00:18:30.908 clat percentiles (usec): 00:18:30.908 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:18:30.908 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:18:30.908 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:18:30.908 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:18:30.908 | 99.99th=[42730] 00:18:30.908 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:18:30.908 slat (nsec): min=2421, max=46703, avg=10545.73, stdev=4143.21 00:18:30.908 clat (usec): min=173, max=988, avg=561.96, stdev=147.10 00:18:30.908 lat (usec): min=183, max=999, avg=572.51, stdev=147.59 00:18:30.908 clat percentiles (usec): 00:18:30.908 | 1.00th=[ 239], 5.00th=[ 297], 10.00th=[ 363], 20.00th=[ 429], 00:18:30.908 | 30.00th=[ 494], 40.00th=[ 529], 50.00th=[ 570], 60.00th=[ 603], 00:18:30.908 | 70.00th=[ 652], 80.00th=[ 693], 90.00th=[ 734], 95.00th=[ 791], 00:18:30.908 | 99.00th=[ 914], 99.50th=[ 922], 99.90th=[ 988], 99.95th=[ 988], 00:18:30.908 | 99.99th=[ 988] 00:18:30.908 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:18:30.908 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:30.908 lat (usec) : 250=1.13%, 500=29.87%, 750=58.41%, 1000=7.37% 00:18:30.908 lat (msec) : 50=3.21% 00:18:30.908 cpu : usr=0.30%, sys=1.09%, ctx=529, majf=0, minf=1 00:18:30.908 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:30.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.908 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.908 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:30.908 job3: (groupid=0, jobs=1): err= 0: pid=683776: Mon Jul 15 09:27:17 2024 00:18:30.908 read: IOPS=16, BW=65.6KiB/s (67.1kB/s)(68.0KiB/1037msec) 00:18:30.908 slat (nsec): min=10550, max=25364, avg=24303.00, stdev=3546.74 00:18:30.908 clat (usec): min=40999, max=42978, avg=42072.23, stdev=477.02 00:18:30.908 lat (usec): min=41024, max=43003, avg=42096.53, stdev=475.40 00:18:30.908 clat percentiles (usec): 00:18:30.908 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:18:30.908 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:18:30.908 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:18:30.908 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:18:30.908 | 99.99th=[42730] 00:18:30.908 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:18:30.908 slat (usec): min=9, max=2426, avg=37.02, stdev=124.69 00:18:30.908 clat (usec): min=178, max=909, avg=582.68, stdev=132.96 00:18:30.908 lat (usec): min=211, max=2992, avg=619.69, stdev=189.72 00:18:30.908 clat percentiles (usec): 00:18:30.908 | 1.00th=[ 277], 5.00th=[ 363], 10.00th=[ 404], 20.00th=[ 474], 00:18:30.908 | 30.00th=[ 519], 40.00th=[ 553], 50.00th=[ 594], 60.00th=[ 619], 00:18:30.908 | 70.00th=[ 660], 80.00th=[ 701], 90.00th=[ 750], 95.00th=[ 783], 00:18:30.908 | 99.00th=[ 873], 99.50th=[ 906], 99.90th=[ 914], 
99.95th=[ 914], 00:18:30.908 | 99.99th=[ 914] 00:18:30.908 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:18:30.908 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:30.908 lat (usec) : 250=0.19%, 500=25.14%, 750=61.81%, 1000=9.64% 00:18:30.908 lat (msec) : 50=3.21% 00:18:30.908 cpu : usr=0.29%, sys=1.74%, ctx=533, majf=0, minf=1 00:18:30.908 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:30.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.908 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.908 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:30.908 00:18:30.908 Run status group 0 (all jobs): 00:18:30.908 READ: bw=266KiB/s (273kB/s), 63.7KiB/s-75.9KiB/s (65.3kB/s-77.7kB/s), io=276KiB (283kB), run=1001-1037msec 00:18:30.908 WRITE: bw=7900KiB/s (8089kB/s), 1975KiB/s-2046KiB/s (2022kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1037msec 00:18:30.908 00:18:30.908 Disk stats (read/write): 00:18:30.908 nvme0n1: ios=65/512, merge=0/0, ticks=854/176, in_queue=1030, util=91.88% 00:18:30.908 nvme0n2: ios=50/512, merge=0/0, ticks=560/301, in_queue=861, util=88.46% 00:18:30.908 nvme0n3: ios=12/512, merge=0/0, ticks=506/217, in_queue=723, util=88.48% 00:18:30.908 nvme0n4: ios=82/512, merge=0/0, ticks=742/277, in_queue=1019, util=97.33% 00:18:30.909 09:27:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:30.909 [global] 00:18:30.909 thread=1 00:18:30.909 invalidate=1 00:18:30.909 rw=randwrite 00:18:30.909 time_based=1 00:18:30.909 runtime=1 00:18:30.909 ioengine=libaio 00:18:30.909 direct=1 00:18:30.909 bs=4096 00:18:30.909 iodepth=1 00:18:30.909 norandommap=0 00:18:30.909 numjobs=1 00:18:30.909 00:18:30.909 verify_dump=1 00:18:30.909 verify_backlog=512 00:18:30.909 verify_state_save=0 00:18:30.909 do_verify=1 00:18:30.909 verify=crc32c-intel 00:18:30.909 [job0] 00:18:30.909 filename=/dev/nvme0n1 00:18:30.909 [job1] 00:18:30.909 filename=/dev/nvme0n2 00:18:30.909 [job2] 00:18:30.909 filename=/dev/nvme0n3 00:18:30.909 [job3] 00:18:30.909 filename=/dev/nvme0n4 00:18:30.909 Could not set queue depth (nvme0n1) 00:18:30.909 Could not set queue depth (nvme0n2) 00:18:30.909 Could not set queue depth (nvme0n3) 00:18:30.909 Could not set queue depth (nvme0n4) 00:18:31.169 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:31.169 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:31.169 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:31.169 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:31.169 fio-3.35 00:18:31.169 Starting 4 threads 00:18:32.551 00:18:32.551 job0: (groupid=0, jobs=1): err= 0: pid=684209: Mon Jul 15 09:27:19 2024 00:18:32.551 read: IOPS=970, BW=3880KiB/s (3973kB/s)(3888KiB/1002msec) 00:18:32.551 slat (nsec): min=5888, max=68170, avg=23782.55, stdev=6913.86 00:18:32.551 clat (usec): min=171, max=1374, avg=709.07, stdev=262.55 00:18:32.551 lat (usec): min=177, max=1411, avg=732.85, stdev=263.00 00:18:32.551 clat percentiles (usec): 00:18:32.551 | 1.00th=[ 258], 5.00th=[ 338], 
10.00th=[ 379], 20.00th=[ 445], 00:18:32.551 | 30.00th=[ 494], 40.00th=[ 553], 50.00th=[ 701], 60.00th=[ 865], 00:18:32.551 | 70.00th=[ 955], 80.00th=[ 996], 90.00th=[ 1037], 95.00th=[ 1057], 00:18:32.551 | 99.00th=[ 1106], 99.50th=[ 1139], 99.90th=[ 1369], 99.95th=[ 1369], 00:18:32.551 | 99.99th=[ 1369] 00:18:32.551 write: IOPS=1021, BW=4088KiB/s (4186kB/s)(4096KiB/1002msec); 0 zone resets 00:18:32.551 slat (nsec): min=8272, max=69957, avg=22018.87, stdev=11791.20 00:18:32.551 clat (usec): min=101, max=715, avg=246.62, stdev=124.00 00:18:32.551 lat (usec): min=110, max=745, avg=268.64, stdev=131.88 00:18:32.551 clat percentiles (usec): 00:18:32.551 | 1.00th=[ 104], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 124], 00:18:32.551 | 30.00th=[ 139], 40.00th=[ 204], 50.00th=[ 225], 60.00th=[ 265], 00:18:32.551 | 70.00th=[ 297], 80.00th=[ 351], 90.00th=[ 424], 95.00th=[ 478], 00:18:32.551 | 99.00th=[ 586], 99.50th=[ 619], 99.90th=[ 660], 99.95th=[ 717], 00:18:32.551 | 99.99th=[ 717] 00:18:32.551 bw ( KiB/s): min= 6608, max= 6608, per=66.27%, avg=6608.00, stdev= 0.00, samples=1 00:18:32.551 iops : min= 1652, max= 1652, avg=1652.00, stdev= 0.00, samples=1 00:18:32.551 lat (usec) : 250=29.51%, 500=34.87%, 750=12.98%, 1000=13.23% 00:18:32.551 lat (msec) : 2=9.42% 00:18:32.551 cpu : usr=2.90%, sys=6.79%, ctx=1997, majf=0, minf=1 00:18:32.551 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:32.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.551 issued rwts: total=972,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.551 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:32.551 job1: (groupid=0, jobs=1): err= 0: pid=684226: Mon Jul 15 09:27:19 2024 00:18:32.551 read: IOPS=16, BW=67.3KiB/s (68.9kB/s)(68.0KiB/1010msec) 00:18:32.551 slat (nsec): min=24282, max=24750, avg=24518.00, stdev=131.36 00:18:32.551 clat (usec): min=1035, max=42969, avg=39587.57, stdev=9939.00 00:18:32.551 lat (usec): min=1060, max=42993, avg=39612.09, stdev=9938.98 00:18:32.551 clat percentiles (usec): 00:18:32.551 | 1.00th=[ 1037], 5.00th=[ 1037], 10.00th=[41157], 20.00th=[41681], 00:18:32.551 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:18:32.551 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:18:32.551 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:18:32.551 | 99.99th=[42730] 00:18:32.551 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:18:32.551 slat (nsec): min=8961, max=48925, avg=27750.52, stdev=8428.69 00:18:32.552 clat (usec): min=274, max=981, avg=622.43, stdev=131.59 00:18:32.552 lat (usec): min=284, max=1011, avg=650.18, stdev=134.41 00:18:32.552 clat percentiles (usec): 00:18:32.552 | 1.00th=[ 334], 5.00th=[ 412], 10.00th=[ 449], 20.00th=[ 506], 00:18:32.552 | 30.00th=[ 545], 40.00th=[ 586], 50.00th=[ 627], 60.00th=[ 660], 00:18:32.552 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 783], 95.00th=[ 848], 00:18:32.552 | 99.00th=[ 922], 99.50th=[ 938], 99.90th=[ 979], 99.95th=[ 979], 00:18:32.552 | 99.99th=[ 979] 00:18:32.552 bw ( KiB/s): min= 4096, max= 4096, per=41.08%, avg=4096.00, stdev= 0.00, samples=1 00:18:32.552 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:32.552 lat (usec) : 500=18.71%, 750=62.95%, 1000=15.12% 00:18:32.552 lat (msec) : 2=0.19%, 50=3.02% 00:18:32.552 cpu : usr=0.89%, sys=1.29%, ctx=529, majf=0, minf=1 
00:18:32.552 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:32.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.552 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.552 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:32.552 job2: (groupid=0, jobs=1): err= 0: pid=684247: Mon Jul 15 09:27:19 2024 00:18:32.552 read: IOPS=16, BW=66.2KiB/s (67.8kB/s)(68.0KiB/1027msec) 00:18:32.552 slat (nsec): min=10276, max=26011, avg=24556.76, stdev=3687.56 00:18:32.552 clat (usec): min=1084, max=42669, avg=39599.05, stdev=9926.59 00:18:32.552 lat (usec): min=1094, max=42695, avg=39623.60, stdev=9930.27 00:18:32.552 clat percentiles (usec): 00:18:32.552 | 1.00th=[ 1090], 5.00th=[ 1090], 10.00th=[41681], 20.00th=[41681], 00:18:32.552 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:18:32.552 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:18:32.552 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:18:32.552 | 99.99th=[42730] 00:18:32.552 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:18:32.552 slat (nsec): min=9619, max=50784, avg=29638.55, stdev=8208.93 00:18:32.552 clat (usec): min=280, max=973, avg=651.80, stdev=131.64 00:18:32.552 lat (usec): min=290, max=1005, avg=681.44, stdev=134.17 00:18:32.552 clat percentiles (usec): 00:18:32.552 | 1.00th=[ 326], 5.00th=[ 420], 10.00th=[ 469], 20.00th=[ 545], 00:18:32.552 | 30.00th=[ 586], 40.00th=[ 627], 50.00th=[ 668], 60.00th=[ 701], 00:18:32.552 | 70.00th=[ 734], 80.00th=[ 766], 90.00th=[ 816], 95.00th=[ 848], 00:18:32.552 | 99.00th=[ 889], 99.50th=[ 906], 99.90th=[ 971], 99.95th=[ 971], 00:18:32.552 | 99.99th=[ 971] 00:18:32.552 bw ( KiB/s): min= 4096, max= 4096, per=41.08%, avg=4096.00, stdev= 0.00, samples=1 00:18:32.552 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:32.552 lat (usec) : 500=13.61%, 750=59.36%, 1000=23.82% 00:18:32.552 lat (msec) : 2=0.19%, 50=3.02% 00:18:32.552 cpu : usr=0.58%, sys=1.66%, ctx=531, majf=0, minf=1 00:18:32.552 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:32.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.552 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.552 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:32.552 job3: (groupid=0, jobs=1): err= 0: pid=684254: Mon Jul 15 09:27:19 2024 00:18:32.552 read: IOPS=15, BW=62.7KiB/s (64.2kB/s)(64.0KiB/1021msec) 00:18:32.552 slat (nsec): min=25513, max=26633, avg=25782.50, stdev=271.06 00:18:32.552 clat (usec): min=41865, max=42215, avg=41977.50, stdev=84.19 00:18:32.552 lat (usec): min=41891, max=42241, avg=42003.28, stdev=84.12 00:18:32.552 clat percentiles (usec): 00:18:32.552 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:18:32.552 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:18:32.552 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:32.552 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:32.552 | 99.99th=[42206] 00:18:32.552 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:18:32.552 slat (nsec): min=8751, max=51307, avg=28990.34, stdev=8473.44 
00:18:32.552 clat (usec): min=276, max=984, avg=645.23, stdev=132.64 00:18:32.552 lat (usec): min=285, max=1015, avg=674.22, stdev=136.27 00:18:32.552 clat percentiles (usec): 00:18:32.552 | 1.00th=[ 306], 5.00th=[ 408], 10.00th=[ 457], 20.00th=[ 529], 00:18:32.552 | 30.00th=[ 578], 40.00th=[ 627], 50.00th=[ 668], 60.00th=[ 701], 00:18:32.552 | 70.00th=[ 725], 80.00th=[ 758], 90.00th=[ 799], 95.00th=[ 848], 00:18:32.552 | 99.00th=[ 898], 99.50th=[ 914], 99.90th=[ 988], 99.95th=[ 988], 00:18:32.552 | 99.99th=[ 988] 00:18:32.552 bw ( KiB/s): min= 4096, max= 4096, per=41.08%, avg=4096.00, stdev= 0.00, samples=1 00:18:32.552 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:32.552 lat (usec) : 500=14.20%, 750=62.12%, 1000=20.64% 00:18:32.552 lat (msec) : 50=3.03% 00:18:32.552 cpu : usr=0.98%, sys=1.96%, ctx=528, majf=0, minf=1 00:18:32.552 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:32.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.552 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.552 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:32.552 00:18:32.552 Run status group 0 (all jobs): 00:18:32.552 READ: bw=3981KiB/s (4076kB/s), 62.7KiB/s-3880KiB/s (64.2kB/s-3973kB/s), io=4088KiB (4186kB), run=1002-1027msec 00:18:32.552 WRITE: bw=9971KiB/s (10.2MB/s), 1994KiB/s-4088KiB/s (2042kB/s-4186kB/s), io=10.0MiB (10.5MB), run=1002-1027msec 00:18:32.552 00:18:32.552 Disk stats (read/write): 00:18:32.552 nvme0n1: ios=862/1024, merge=0/0, ticks=534/180, in_queue=714, util=88.18% 00:18:32.552 nvme0n2: ios=57/512, merge=0/0, ticks=602/304, in_queue=906, util=96.43% 00:18:32.552 nvme0n3: ios=50/512, merge=0/0, ticks=1333/314, in_queue=1647, util=100.00% 00:18:32.552 nvme0n4: ios=11/512, merge=0/0, ticks=462/268, in_queue=730, util=89.55% 00:18:32.552 09:27:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:32.552 [global] 00:18:32.552 thread=1 00:18:32.552 invalidate=1 00:18:32.552 rw=write 00:18:32.552 time_based=1 00:18:32.552 runtime=1 00:18:32.552 ioengine=libaio 00:18:32.552 direct=1 00:18:32.552 bs=4096 00:18:32.552 iodepth=128 00:18:32.552 norandommap=0 00:18:32.552 numjobs=1 00:18:32.552 00:18:32.552 verify_dump=1 00:18:32.552 verify_backlog=512 00:18:32.552 verify_state_save=0 00:18:32.552 do_verify=1 00:18:32.552 verify=crc32c-intel 00:18:32.552 [job0] 00:18:32.552 filename=/dev/nvme0n1 00:18:32.552 [job1] 00:18:32.552 filename=/dev/nvme0n2 00:18:32.552 [job2] 00:18:32.552 filename=/dev/nvme0n3 00:18:32.552 [job3] 00:18:32.552 filename=/dev/nvme0n4 00:18:32.552 Could not set queue depth (nvme0n1) 00:18:32.552 Could not set queue depth (nvme0n2) 00:18:32.552 Could not set queue depth (nvme0n3) 00:18:32.552 Could not set queue depth (nvme0n4) 00:18:32.812 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:32.812 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:32.812 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:32.812 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:32.812 fio-3.35 00:18:32.812 Starting 4 threads 
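The /dev/nvme0n1 through /dev/nvme0n4 files targeted by these fio runs are the four namespaces of cnode1, attached on the initiator side earlier in the log (target/fio.sh@46-48, around 09:27:14). As a hedged sketch, with the NQN, host identity and serial taken from the trace, that connect-and-wait step amounts to:

nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
    --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb

# waitforserial: poll until all four namespaces of the subsystem show up as block devices
while (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) < 4 )); do
    sleep 2
done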
00:18:34.264 00:18:34.264 job0: (groupid=0, jobs=1): err= 0: pid=684674: Mon Jul 15 09:27:21 2024 00:18:34.264 read: IOPS=3863, BW=15.1MiB/s (15.8MB/s)(15.2MiB/1004msec) 00:18:34.264 slat (nsec): min=919, max=11361k, avg=83166.60, stdev=624110.90 00:18:34.264 clat (usec): min=2503, max=92490, avg=10604.08, stdev=8507.74 00:18:34.264 lat (usec): min=2508, max=92498, avg=10687.25, stdev=8566.45 00:18:34.264 clat percentiles (usec): 00:18:34.264 | 1.00th=[ 3064], 5.00th=[ 3752], 10.00th=[ 4113], 20.00th=[ 5866], 00:18:34.264 | 30.00th=[ 6783], 40.00th=[ 7570], 50.00th=[ 8848], 60.00th=[ 9896], 00:18:34.264 | 70.00th=[10945], 80.00th=[14091], 90.00th=[19006], 95.00th=[21627], 00:18:34.264 | 99.00th=[28967], 99.50th=[84411], 99.90th=[92799], 99.95th=[92799], 00:18:34.264 | 99.99th=[92799] 00:18:34.264 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:18:34.264 slat (nsec): min=1628, max=15550k, avg=149233.62, stdev=934854.44 00:18:34.264 clat (usec): min=1509, max=158893, avg=21059.74, stdev=30475.44 00:18:34.264 lat (usec): min=1532, max=158901, avg=21208.98, stdev=30677.42 00:18:34.264 clat percentiles (msec): 00:18:34.264 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 7], 00:18:34.264 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 11], 60.00th=[ 13], 00:18:34.264 | 70.00th=[ 16], 80.00th=[ 22], 90.00th=[ 60], 95.00th=[ 91], 00:18:34.264 | 99.00th=[ 150], 99.50th=[ 155], 99.90th=[ 159], 99.95th=[ 159], 00:18:34.264 | 99.99th=[ 159] 00:18:34.264 bw ( KiB/s): min=11456, max=21312, per=20.26%, avg=16384.00, stdev=6969.24, samples=2 00:18:34.264 iops : min= 2864, max= 5328, avg=4096.00, stdev=1742.31, samples=2 00:18:34.264 lat (msec) : 2=0.26%, 4=7.49%, 10=47.47%, 20=29.18%, 50=9.50% 00:18:34.264 lat (msec) : 100=3.70%, 250=2.39% 00:18:34.264 cpu : usr=3.99%, sys=3.49%, ctx=479, majf=0, minf=1 00:18:34.264 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:34.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:34.264 issued rwts: total=3879,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.264 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:34.264 job1: (groupid=0, jobs=1): err= 0: pid=684681: Mon Jul 15 09:27:21 2024 00:18:34.264 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:18:34.264 slat (nsec): min=924, max=15952k, avg=99193.31, stdev=779927.03 00:18:34.264 clat (usec): min=4206, max=37483, avg=12588.73, stdev=4955.16 00:18:34.264 lat (usec): min=4213, max=37510, avg=12687.92, stdev=5029.82 00:18:34.264 clat percentiles (usec): 00:18:34.264 | 1.00th=[ 6915], 5.00th=[ 8225], 10.00th=[ 8717], 20.00th=[ 8848], 00:18:34.264 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[10290], 60.00th=[11731], 00:18:34.264 | 70.00th=[14353], 80.00th=[16319], 90.00th=[20317], 95.00th=[22938], 00:18:34.264 | 99.00th=[29230], 99.50th=[29492], 99.90th=[33162], 99.95th=[33817], 00:18:34.264 | 99.99th=[37487] 00:18:34.264 write: IOPS=5000, BW=19.5MiB/s (20.5MB/s)(19.7MiB/1008msec); 0 zone resets 00:18:34.264 slat (nsec): min=1585, max=11931k, avg=103060.00, stdev=709662.03 00:18:34.264 clat (usec): min=1135, max=53782, avg=13851.43, stdev=10296.09 00:18:34.264 lat (usec): min=1146, max=53783, avg=13954.49, stdev=10355.56 00:18:34.264 clat percentiles (usec): 00:18:34.264 | 1.00th=[ 4080], 5.00th=[ 5276], 10.00th=[ 6325], 20.00th=[ 7832], 00:18:34.264 | 30.00th=[ 8291], 40.00th=[ 8979], 50.00th=[10421], 60.00th=[11469], 
00:18:34.264 | 70.00th=[14353], 80.00th=[16188], 90.00th=[28705], 95.00th=[39584], 00:18:34.264 | 99.00th=[53216], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:18:34.264 | 99.99th=[53740] 00:18:34.264 bw ( KiB/s): min=18808, max=20496, per=24.30%, avg=19652.00, stdev=1193.60, samples=2 00:18:34.264 iops : min= 4702, max= 5124, avg=4913.00, stdev=298.40, samples=2 00:18:34.264 lat (msec) : 2=0.02%, 4=0.48%, 10=45.71%, 20=41.70%, 50=11.18% 00:18:34.264 lat (msec) : 100=0.90% 00:18:34.264 cpu : usr=4.27%, sys=4.67%, ctx=324, majf=0, minf=1 00:18:34.264 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:34.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:34.264 issued rwts: total=4608,5041,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.264 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:34.264 job2: (groupid=0, jobs=1): err= 0: pid=684688: Mon Jul 15 09:27:21 2024 00:18:34.264 read: IOPS=6152, BW=24.0MiB/s (25.2MB/s)(24.2MiB/1009msec) 00:18:34.264 slat (nsec): min=871, max=14499k, avg=67006.24, stdev=439923.63 00:18:34.264 clat (usec): min=2797, max=33163, avg=8673.86, stdev=3228.52 00:18:34.264 lat (usec): min=2808, max=34819, avg=8740.86, stdev=3252.09 00:18:34.264 clat percentiles (usec): 00:18:34.264 | 1.00th=[ 4424], 5.00th=[ 5211], 10.00th=[ 6194], 20.00th=[ 6652], 00:18:34.264 | 30.00th=[ 6915], 40.00th=[ 7635], 50.00th=[ 7963], 60.00th=[ 8455], 00:18:34.264 | 70.00th=[ 8979], 80.00th=[10159], 90.00th=[11207], 95.00th=[13829], 00:18:34.264 | 99.00th=[24511], 99.50th=[24773], 99.90th=[31065], 99.95th=[33162], 00:18:34.264 | 99.99th=[33162] 00:18:34.264 write: IOPS=6596, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1009msec); 0 zone resets 00:18:34.264 slat (nsec): min=1544, max=23363k, avg=80367.20, stdev=567708.01 00:18:34.264 clat (usec): min=446, max=78085, avg=11100.68, stdev=10051.07 00:18:34.264 lat (usec): min=477, max=78093, avg=11181.05, stdev=10104.30 00:18:34.264 clat percentiles (usec): 00:18:34.264 | 1.00th=[ 2147], 5.00th=[ 4424], 10.00th=[ 5866], 20.00th=[ 6456], 00:18:34.264 | 30.00th=[ 6652], 40.00th=[ 6849], 50.00th=[ 7767], 60.00th=[ 9110], 00:18:34.264 | 70.00th=[11600], 80.00th=[13960], 90.00th=[16712], 95.00th=[26346], 00:18:34.264 | 99.00th=[70779], 99.50th=[72877], 99.90th=[78119], 99.95th=[78119], 00:18:34.264 | 99.99th=[78119] 00:18:34.264 bw ( KiB/s): min=24064, max=28672, per=32.60%, avg=26368.00, stdev=3258.35, samples=2 00:18:34.264 iops : min= 6016, max= 7168, avg=6592.00, stdev=814.59, samples=2 00:18:34.264 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.09% 00:18:34.264 lat (msec) : 2=0.28%, 4=1.70%, 10=69.02%, 20=23.94%, 50=4.03% 00:18:34.264 lat (msec) : 100=0.87% 00:18:34.264 cpu : usr=4.27%, sys=6.15%, ctx=776, majf=0, minf=1 00:18:34.264 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:18:34.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:34.265 issued rwts: total=6208,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.265 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:34.265 job3: (groupid=0, jobs=1): err= 0: pid=684695: Mon Jul 15 09:27:21 2024 00:18:34.265 read: IOPS=4530, BW=17.7MiB/s (18.6MB/s)(17.8MiB/1004msec) 00:18:34.265 slat (nsec): min=925, max=23458k, avg=111465.08, stdev=893553.12 00:18:34.265 clat (msec): min=3, max=125, 
avg=14.29, stdev=11.61 00:18:34.265 lat (msec): min=3, max=125, avg=14.41, stdev=11.75 00:18:34.265 clat percentiles (msec): 00:18:34.265 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:18:34.265 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:18:34.265 | 70.00th=[ 14], 80.00th=[ 20], 90.00th=[ 24], 95.00th=[ 29], 00:18:34.265 | 99.00th=[ 75], 99.50th=[ 102], 99.90th=[ 126], 99.95th=[ 126], 00:18:34.265 | 99.99th=[ 126] 00:18:34.265 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:18:34.265 slat (nsec): min=1605, max=16625k, avg=91274.30, stdev=637019.07 00:18:34.265 clat (usec): min=355, max=125200, avg=13506.63, stdev=15647.10 00:18:34.265 lat (usec): min=365, max=125226, avg=13597.91, stdev=15717.43 00:18:34.265 clat percentiles (usec): 00:18:34.265 | 1.00th=[ 816], 5.00th=[ 3163], 10.00th=[ 5211], 20.00th=[ 6849], 00:18:34.265 | 30.00th=[ 7504], 40.00th=[ 8291], 50.00th=[ 9503], 60.00th=[ 11994], 00:18:34.265 | 70.00th=[ 13304], 80.00th=[ 15533], 90.00th=[ 20055], 95.00th=[ 29754], 00:18:34.265 | 99.00th=[112722], 99.50th=[116917], 99.90th=[121111], 99.95th=[121111], 00:18:34.265 | 99.99th=[125305] 00:18:34.265 bw ( KiB/s): min=16384, max=20480, per=22.79%, avg=18432.00, stdev=2896.31, samples=2 00:18:34.265 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:18:34.265 lat (usec) : 500=0.12%, 750=0.22%, 1000=0.34% 00:18:34.265 lat (msec) : 2=0.15%, 4=2.77%, 10=37.15%, 20=45.08%, 50=12.07% 00:18:34.265 lat (msec) : 100=0.97%, 250=1.12% 00:18:34.265 cpu : usr=3.39%, sys=5.38%, ctx=334, majf=0, minf=1 00:18:34.265 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:34.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:34.265 issued rwts: total=4549,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.265 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:34.265 00:18:34.265 Run status group 0 (all jobs): 00:18:34.265 READ: bw=74.5MiB/s (78.1MB/s), 15.1MiB/s-24.0MiB/s (15.8MB/s-25.2MB/s), io=75.2MiB (78.8MB), run=1004-1009msec 00:18:34.265 WRITE: bw=79.0MiB/s (82.8MB/s), 15.9MiB/s-25.8MiB/s (16.7MB/s-27.0MB/s), io=79.7MiB (83.6MB), run=1004-1009msec 00:18:34.265 00:18:34.265 Disk stats (read/write): 00:18:34.265 nvme0n1: ios=3118/3415, merge=0/0, ticks=32529/67596, in_queue=100125, util=97.90% 00:18:34.265 nvme0n2: ios=3599/3692, merge=0/0, ticks=45358/56992, in_queue=102350, util=91.34% 00:18:34.265 nvme0n3: ios=5820/6144, merge=0/0, ticks=24087/31738, in_queue=55825, util=87.75% 00:18:34.265 nvme0n4: ios=3631/3864, merge=0/0, ticks=43606/43041, in_queue=86647, util=99.68% 00:18:34.265 09:27:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:34.265 [global] 00:18:34.265 thread=1 00:18:34.265 invalidate=1 00:18:34.265 rw=randwrite 00:18:34.265 time_based=1 00:18:34.265 runtime=1 00:18:34.265 ioengine=libaio 00:18:34.265 direct=1 00:18:34.265 bs=4096 00:18:34.265 iodepth=128 00:18:34.265 norandommap=0 00:18:34.265 numjobs=1 00:18:34.265 00:18:34.265 verify_dump=1 00:18:34.265 verify_backlog=512 00:18:34.265 verify_state_save=0 00:18:34.265 do_verify=1 00:18:34.265 verify=crc32c-intel 00:18:34.265 [job0] 00:18:34.265 filename=/dev/nvme0n1 00:18:34.265 [job1] 00:18:34.265 filename=/dev/nvme0n2 00:18:34.265 [job2] 00:18:34.265 filename=/dev/nvme0n3 
00:18:34.265 [job3] 00:18:34.265 filename=/dev/nvme0n4 00:18:34.265 Could not set queue depth (nvme0n1) 00:18:34.265 Could not set queue depth (nvme0n2) 00:18:34.265 Could not set queue depth (nvme0n3) 00:18:34.265 Could not set queue depth (nvme0n4) 00:18:34.530 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:34.530 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:34.531 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:34.531 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:34.531 fio-3.35 00:18:34.531 Starting 4 threads 00:18:35.919 00:18:35.919 job0: (groupid=0, jobs=1): err= 0: pid=685191: Mon Jul 15 09:27:22 2024 00:18:35.919 read: IOPS=10.1k, BW=39.3MiB/s (41.2MB/s)(39.5MiB/1005msec) 00:18:35.919 slat (nsec): min=853, max=7439.1k, avg=47517.17, stdev=359895.60 00:18:35.919 clat (usec): min=1389, max=21093, avg=6411.43, stdev=1836.77 00:18:35.919 lat (usec): min=2063, max=21101, avg=6458.95, stdev=1859.60 00:18:35.919 clat percentiles (usec): 00:18:35.919 | 1.00th=[ 3326], 5.00th=[ 4490], 10.00th=[ 4817], 20.00th=[ 5211], 00:18:35.919 | 30.00th=[ 5604], 40.00th=[ 5735], 50.00th=[ 6063], 60.00th=[ 6325], 00:18:35.919 | 70.00th=[ 6652], 80.00th=[ 7242], 90.00th=[ 8586], 95.00th=[ 9503], 00:18:35.919 | 99.00th=[13960], 99.50th=[16319], 99.90th=[20317], 99.95th=[20317], 00:18:35.919 | 99.99th=[21103] 00:18:35.919 write: IOPS=10.2k, BW=39.8MiB/s (41.7MB/s)(40.0MiB/1005msec); 0 zone resets 00:18:35.919 slat (nsec): min=1473, max=5304.2k, avg=44895.07, stdev=282141.93 00:18:35.919 clat (usec): min=621, max=25306, avg=6114.10, stdev=3351.63 00:18:35.919 lat (usec): min=628, max=25315, avg=6158.99, stdev=3372.87 00:18:35.919 clat percentiles (usec): 00:18:35.919 | 1.00th=[ 1696], 5.00th=[ 2999], 10.00th=[ 3392], 20.00th=[ 3982], 00:18:35.919 | 30.00th=[ 4883], 40.00th=[ 5276], 50.00th=[ 5604], 60.00th=[ 5800], 00:18:35.919 | 70.00th=[ 5932], 80.00th=[ 6587], 90.00th=[ 8848], 95.00th=[14091], 00:18:35.919 | 99.00th=[21103], 99.50th=[22414], 99.90th=[25035], 99.95th=[25297], 00:18:35.919 | 99.99th=[25297] 00:18:35.919 bw ( KiB/s): min=36864, max=45056, per=45.86%, avg=40960.00, stdev=5792.62, samples=2 00:18:35.919 iops : min= 9216, max=11264, avg=10240.00, stdev=1448.15, samples=2 00:18:35.919 lat (usec) : 750=0.05%, 1000=0.06% 00:18:35.919 lat (msec) : 2=0.55%, 4=11.12%, 10=82.29%, 20=5.22%, 50=0.71% 00:18:35.919 cpu : usr=5.98%, sys=9.56%, ctx=796, majf=0, minf=1 00:18:35.919 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:18:35.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:35.919 issued rwts: total=10105,10240,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.919 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:35.919 job1: (groupid=0, jobs=1): err= 0: pid=685199: Mon Jul 15 09:27:22 2024 00:18:35.919 read: IOPS=3493, BW=13.6MiB/s (14.3MB/s)(13.7MiB/1006msec) 00:18:35.919 slat (nsec): min=903, max=27803k, avg=154102.13, stdev=1181421.56 00:18:35.919 clat (usec): min=1200, max=64577, avg=19686.75, stdev=10982.42 00:18:35.919 lat (usec): min=5663, max=64602, avg=19840.85, stdev=11087.00 00:18:35.919 clat percentiles (usec): 00:18:35.919 | 1.00th=[ 6390], 5.00th=[ 7963], 
10.00th=[ 8291], 20.00th=[ 8979], 00:18:35.919 | 30.00th=[11994], 40.00th=[14877], 50.00th=[16057], 60.00th=[19792], 00:18:35.919 | 70.00th=[25035], 80.00th=[28181], 90.00th=[35390], 95.00th=[38536], 00:18:35.919 | 99.00th=[52691], 99.50th=[63701], 99.90th=[63701], 99.95th=[63701], 00:18:35.919 | 99.99th=[64750] 00:18:35.919 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:18:35.920 slat (nsec): min=1557, max=12402k, avg=122751.79, stdev=812210.71 00:18:35.920 clat (usec): min=6614, max=63423, avg=16131.89, stdev=9148.86 00:18:35.920 lat (usec): min=6616, max=63432, avg=16254.65, stdev=9220.06 00:18:35.920 clat percentiles (usec): 00:18:35.920 | 1.00th=[ 7832], 5.00th=[ 8455], 10.00th=[ 8455], 20.00th=[ 8717], 00:18:35.920 | 30.00th=[ 9765], 40.00th=[11469], 50.00th=[14091], 60.00th=[14484], 00:18:35.920 | 70.00th=[19530], 80.00th=[22676], 90.00th=[26084], 95.00th=[34341], 00:18:35.920 | 99.00th=[51119], 99.50th=[51119], 99.90th=[51643], 99.95th=[51643], 00:18:35.920 | 99.99th=[63177] 00:18:35.920 bw ( KiB/s): min=14160, max=14512, per=16.05%, avg=14336.00, stdev=248.90, samples=2 00:18:35.920 iops : min= 3540, max= 3628, avg=3584.00, stdev=62.23, samples=2 00:18:35.920 lat (msec) : 2=0.01%, 10=28.02%, 20=39.59%, 50=30.39%, 100=1.99% 00:18:35.920 cpu : usr=1.99%, sys=4.58%, ctx=254, majf=0, minf=1 00:18:35.920 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:35.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:35.920 issued rwts: total=3514,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.920 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:35.920 job2: (groupid=0, jobs=1): err= 0: pid=685208: Mon Jul 15 09:27:22 2024 00:18:35.920 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:18:35.920 slat (nsec): min=899, max=10559k, avg=82554.54, stdev=603118.36 00:18:35.920 clat (usec): min=3200, max=38466, avg=11057.84, stdev=3673.62 00:18:35.920 lat (usec): min=3207, max=38474, avg=11140.39, stdev=3722.88 00:18:35.920 clat percentiles (usec): 00:18:35.920 | 1.00th=[ 5276], 5.00th=[ 7242], 10.00th=[ 7504], 20.00th=[ 8455], 00:18:35.920 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[10290], 60.00th=[10945], 00:18:35.920 | 70.00th=[12125], 80.00th=[12911], 90.00th=[15401], 95.00th=[17433], 00:18:35.920 | 99.00th=[26346], 99.50th=[30802], 99.90th=[34866], 99.95th=[38536], 00:18:35.920 | 99.99th=[38536] 00:18:35.920 write: IOPS=6054, BW=23.7MiB/s (24.8MB/s)(23.7MiB/1004msec); 0 zone resets 00:18:35.920 slat (nsec): min=1567, max=8725.0k, avg=76690.38, stdev=509881.25 00:18:35.920 clat (usec): min=572, max=38478, avg=10722.84, stdev=6949.58 00:18:35.920 lat (usec): min=589, max=38502, avg=10799.53, stdev=6997.67 00:18:35.920 clat percentiles (usec): 00:18:35.920 | 1.00th=[ 2573], 5.00th=[ 4490], 10.00th=[ 5538], 20.00th=[ 6390], 00:18:35.920 | 30.00th=[ 6849], 40.00th=[ 7242], 50.00th=[ 8160], 60.00th=[10028], 00:18:35.920 | 70.00th=[10945], 80.00th=[12780], 90.00th=[20317], 95.00th=[31065], 00:18:35.920 | 99.00th=[33817], 99.50th=[34341], 99.90th=[35390], 99.95th=[35390], 00:18:35.920 | 99.99th=[38536] 00:18:35.920 bw ( KiB/s): min=23632, max=23984, per=26.66%, avg=23808.00, stdev=248.90, samples=2 00:18:35.920 iops : min= 5908, max= 5996, avg=5952.00, stdev=62.23, samples=2 00:18:35.920 lat (usec) : 750=0.01%, 1000=0.03% 00:18:35.920 lat (msec) : 2=0.39%, 4=1.09%, 10=49.99%, 20=41.63%, 50=6.87% 
00:18:35.920 cpu : usr=4.69%, sys=5.98%, ctx=371, majf=0, minf=1 00:18:35.920 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:18:35.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:35.920 issued rwts: total=5632,6079,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.920 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:35.920 job3: (groupid=0, jobs=1): err= 0: pid=685209: Mon Jul 15 09:27:22 2024 00:18:35.920 read: IOPS=2408, BW=9634KiB/s (9865kB/s)(9692KiB/1006msec) 00:18:35.920 slat (nsec): min=918, max=17540k, avg=152027.27, stdev=929615.08 00:18:35.920 clat (usec): min=1195, max=49149, avg=20058.05, stdev=8238.60 00:18:35.920 lat (usec): min=5926, max=53360, avg=20210.07, stdev=8294.66 00:18:35.920 clat percentiles (usec): 00:18:35.920 | 1.00th=[ 8717], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[13960], 00:18:35.920 | 30.00th=[15008], 40.00th=[15795], 50.00th=[17433], 60.00th=[20841], 00:18:35.920 | 70.00th=[24249], 80.00th=[26870], 90.00th=[32113], 95.00th=[35914], 00:18:35.920 | 99.00th=[41157], 99.50th=[46924], 99.90th=[49021], 99.95th=[49021], 00:18:35.920 | 99.99th=[49021] 00:18:35.920 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:18:35.920 slat (nsec): min=1581, max=12928k, avg=242606.13, stdev=1103686.18 00:18:35.920 clat (usec): min=5253, max=84124, avg=30761.19, stdev=21244.24 00:18:35.920 lat (usec): min=5263, max=84132, avg=31003.80, stdev=21402.56 00:18:35.920 clat percentiles (usec): 00:18:35.920 | 1.00th=[11076], 5.00th=[12780], 10.00th=[12911], 20.00th=[14091], 00:18:35.920 | 30.00th=[14746], 40.00th=[20579], 50.00th=[24249], 60.00th=[26084], 00:18:35.920 | 70.00th=[31065], 80.00th=[43779], 90.00th=[76022], 95.00th=[82314], 00:18:35.920 | 99.00th=[83362], 99.50th=[84411], 99.90th=[84411], 99.95th=[84411], 00:18:35.920 | 99.99th=[84411] 00:18:35.920 bw ( KiB/s): min= 8312, max=12168, per=11.46%, avg=10240.00, stdev=2726.60, samples=2 00:18:35.920 iops : min= 2078, max= 3042, avg=2560.00, stdev=681.65, samples=2 00:18:35.920 lat (msec) : 2=0.02%, 10=5.86%, 20=41.26%, 50=45.09%, 100=7.77% 00:18:35.920 cpu : usr=2.09%, sys=2.99%, ctx=295, majf=0, minf=1 00:18:35.920 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:18:35.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:35.920 issued rwts: total=2423,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.920 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:35.920 00:18:35.920 Run status group 0 (all jobs): 00:18:35.920 READ: bw=84.2MiB/s (88.2MB/s), 9634KiB/s-39.3MiB/s (9865kB/s-41.2MB/s), io=84.7MiB (88.8MB), run=1004-1006msec 00:18:35.920 WRITE: bw=87.2MiB/s (91.5MB/s), 9.94MiB/s-39.8MiB/s (10.4MB/s-41.7MB/s), io=87.7MiB (92.0MB), run=1004-1006msec 00:18:35.920 00:18:35.920 Disk stats (read/write): 00:18:35.920 nvme0n1: ios=8242/8263, merge=0/0, ticks=49797/49790, in_queue=99587, util=87.07% 00:18:35.920 nvme0n2: ios=3092/3294, merge=0/0, ticks=27779/24054, in_queue=51833, util=96.31% 00:18:35.920 nvme0n3: ios=4654/5119, merge=0/0, ticks=47156/54274, in_queue=101430, util=100.00% 00:18:35.920 nvme0n4: ios=2085/2115, merge=0/0, ticks=18254/33333, in_queue=51587, util=95.71% 00:18:35.920 09:27:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:35.920 09:27:22 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=685519 00:18:35.920 09:27:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:35.920 09:27:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:35.920 [global] 00:18:35.920 thread=1 00:18:35.920 invalidate=1 00:18:35.920 rw=read 00:18:35.920 time_based=1 00:18:35.920 runtime=10 00:18:35.920 ioengine=libaio 00:18:35.920 direct=1 00:18:35.920 bs=4096 00:18:35.920 iodepth=1 00:18:35.920 norandommap=1 00:18:35.920 numjobs=1 00:18:35.920 00:18:35.920 [job0] 00:18:35.920 filename=/dev/nvme0n1 00:18:35.920 [job1] 00:18:35.920 filename=/dev/nvme0n2 00:18:35.920 [job2] 00:18:35.920 filename=/dev/nvme0n3 00:18:35.920 [job3] 00:18:35.920 filename=/dev/nvme0n4 00:18:35.920 Could not set queue depth (nvme0n1) 00:18:35.920 Could not set queue depth (nvme0n2) 00:18:35.920 Could not set queue depth (nvme0n3) 00:18:35.920 Could not set queue depth (nvme0n4) 00:18:36.197 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:36.197 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:36.197 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:36.197 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:36.197 fio-3.35 00:18:36.197 Starting 4 threads 00:18:38.741 09:27:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:39.002 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=253952, buflen=4096 00:18:39.002 fio: pid=685718, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:39.002 09:27:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:39.002 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=626688, buflen=4096 00:18:39.002 fio: pid=685717, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:39.002 09:27:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:39.002 09:27:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:39.263 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=3321856, buflen=4096 00:18:39.263 fio: pid=685713, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:39.263 09:27:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:39.263 09:27:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:39.525 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=9072640, buflen=4096 00:18:39.525 fio: pid=685714, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:39.525 09:27:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:39.525 09:27:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:39.525 00:18:39.525 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=685713: Mon Jul 15 09:27:26 2024 00:18:39.525 read: IOPS=279, BW=1118KiB/s (1145kB/s)(3244KiB/2902msec) 00:18:39.525 slat (usec): min=6, max=14566, avg=41.48, stdev=510.36 00:18:39.525 clat (usec): min=615, max=43048, avg=3529.22, stdev=9647.95 00:18:39.525 lat (usec): min=622, max=43072, avg=3570.72, stdev=9656.77 00:18:39.525 clat percentiles (usec): 00:18:39.525 | 1.00th=[ 783], 5.00th=[ 881], 10.00th=[ 979], 20.00th=[ 1037], 00:18:39.525 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1156], 00:18:39.525 | 70.00th=[ 1188], 80.00th=[ 1205], 90.00th=[ 1254], 95.00th=[41157], 00:18:39.525 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:18:39.525 | 99.99th=[43254] 00:18:39.525 bw ( KiB/s): min= 912, max= 1616, per=27.52%, avg=1168.00, stdev=285.55, samples=5 00:18:39.525 iops : min= 228, max= 404, avg=292.00, stdev=71.39, samples=5 00:18:39.525 lat (usec) : 750=0.49%, 1000=12.93% 00:18:39.525 lat (msec) : 2=80.54%, 50=5.91% 00:18:39.525 cpu : usr=0.28%, sys=0.86%, ctx=813, majf=0, minf=1 00:18:39.525 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:39.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.525 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.525 issued rwts: total=812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.525 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:39.525 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=685714: Mon Jul 15 09:27:26 2024 00:18:39.525 read: IOPS=725, BW=2900KiB/s (2970kB/s)(8860KiB/3055msec) 00:18:39.525 slat (usec): min=7, max=18171, avg=51.23, stdev=642.06 00:18:39.525 clat (usec): min=647, max=42918, avg=1320.91, stdev=2754.85 00:18:39.525 lat (usec): min=671, max=42942, avg=1372.15, stdev=2827.39 00:18:39.525 clat percentiles (usec): 00:18:39.525 | 1.00th=[ 873], 5.00th=[ 988], 10.00th=[ 1029], 20.00th=[ 1074], 00:18:39.525 | 30.00th=[ 1123], 40.00th=[ 1139], 50.00th=[ 1156], 60.00th=[ 1172], 00:18:39.525 | 70.00th=[ 1172], 80.00th=[ 1188], 90.00th=[ 1221], 95.00th=[ 1237], 00:18:39.525 | 99.00th=[ 1303], 99.50th=[ 1385], 99.90th=[42730], 99.95th=[42730], 00:18:39.525 | 99.99th=[42730] 00:18:39.525 bw ( KiB/s): min= 2168, max= 3488, per=67.68%, avg=2872.00, stdev=580.68, samples=5 00:18:39.525 iops : min= 542, max= 872, avg=718.00, stdev=145.17, samples=5 00:18:39.525 lat (usec) : 750=0.09%, 1000=6.05% 00:18:39.525 lat (msec) : 2=93.37%, 50=0.45% 00:18:39.525 cpu : usr=0.72%, sys=2.16%, ctx=2220, majf=0, minf=1 00:18:39.525 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:39.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.525 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.525 issued rwts: total=2216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.525 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:39.525 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=685717: Mon Jul 15 09:27:26 2024 00:18:39.525 read: IOPS=56, BW=224KiB/s (229kB/s)(612KiB/2735msec) 00:18:39.525 slat (nsec): min=6722, max=54432, avg=25062.01, stdev=4532.32 00:18:39.525 clat (usec): min=626, 
max=43095, avg=17836.47, stdev=20208.95 00:18:39.525 lat (usec): min=633, max=43121, avg=17861.53, stdev=20209.26 00:18:39.525 clat percentiles (usec): 00:18:39.525 | 1.00th=[ 685], 5.00th=[ 865], 10.00th=[ 906], 20.00th=[ 979], 00:18:39.525 | 30.00th=[ 996], 40.00th=[ 1020], 50.00th=[ 1045], 60.00th=[41157], 00:18:39.525 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:18:39.525 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:18:39.525 | 99.99th=[43254] 00:18:39.525 bw ( KiB/s): min= 96, max= 616, per=5.54%, avg=235.20, stdev=218.55, samples=5 00:18:39.525 iops : min= 24, max= 154, avg=58.80, stdev=54.64, samples=5 00:18:39.525 lat (usec) : 750=2.60%, 1000=27.27% 00:18:39.525 lat (msec) : 2=27.92%, 10=0.65%, 50=40.91% 00:18:39.525 cpu : usr=0.00%, sys=0.33%, ctx=154, majf=0, minf=1 00:18:39.525 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:39.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.525 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.525 issued rwts: total=154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.525 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:39.525 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=685718: Mon Jul 15 09:27:26 2024 00:18:39.525 read: IOPS=24, BW=96.1KiB/s (98.4kB/s)(248KiB/2580msec) 00:18:39.525 slat (nsec): min=24962, max=40357, avg=25722.59, stdev=1888.77 00:18:39.525 clat (usec): min=902, max=43042, avg=41507.75, stdev=5259.45 00:18:39.525 lat (usec): min=943, max=43067, avg=41533.48, stdev=5257.56 00:18:39.525 clat percentiles (usec): 00:18:39.525 | 1.00th=[ 906], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:18:39.525 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:18:39.525 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:18:39.525 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:18:39.525 | 99.99th=[43254] 00:18:39.525 bw ( KiB/s): min= 96, max= 96, per=2.26%, avg=96.00, stdev= 0.00, samples=5 00:18:39.525 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:18:39.525 lat (usec) : 1000=1.59% 00:18:39.526 lat (msec) : 50=96.83% 00:18:39.526 cpu : usr=0.00%, sys=0.12%, ctx=65, majf=0, minf=2 00:18:39.526 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:39.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.526 complete : 0=1.6%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.526 issued rwts: total=63,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.526 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:39.526 00:18:39.526 Run status group 0 (all jobs): 00:18:39.526 READ: bw=4244KiB/s (4345kB/s), 96.1KiB/s-2900KiB/s (98.4kB/s-2970kB/s), io=12.7MiB (13.3MB), run=2580-3055msec 00:18:39.526 00:18:39.526 Disk stats (read/write): 00:18:39.526 nvme0n1: ios=804/0, merge=0/0, ticks=2748/0, in_queue=2748, util=94.36% 00:18:39.526 nvme0n2: ios=2081/0, merge=0/0, ticks=2693/0, in_queue=2693, util=95.33% 00:18:39.526 nvme0n3: ios=149/0, merge=0/0, ticks=2561/0, in_queue=2561, util=96.07% 00:18:39.526 nvme0n4: ios=95/0, merge=0/0, ticks=3184/0, in_queue=3184, util=98.56% 00:18:39.526 09:27:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:39.526 09:27:26 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:39.787 09:27:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:39.787 09:27:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:40.048 09:27:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:40.048 09:27:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:40.048 09:27:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:40.048 09:27:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:40.310 09:27:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:40.310 09:27:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 685519 00:18:40.310 09:27:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:40.310 09:27:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:40.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:40.310 09:27:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:40.310 09:27:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:18:40.310 09:27:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:40.310 09:27:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:40.310 09:27:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:40.310 09:27:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:40.310 09:27:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:18:40.310 09:27:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:40.310 09:27:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:40.310 nvmf hotplug test: fio failed as expected 00:18:40.310 09:27:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:40.571 09:27:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:40.571 09:27:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:40.571 09:27:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:40.571 09:27:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:40.571 09:27:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:40.571 09:27:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:40.571 09:27:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:40.571 09:27:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:40.571 09:27:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:40.571 
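The lsblk/grep polling traced above (waitforserial_disconnect) reduces to roughly the sketch below. The loop bound and the one-second sleep are assumptions rather than a copy of autotest_common.sh; only the lsblk -o NAME,SERIAL / grep -q -w probe is taken from the trace.

# Minimal sketch of waiting for a namespace with a given serial to disappear
# after 'nvme disconnect'; the retry bound and sleep interval are assumed.
wait_serial_gone() {
    local serial=$1 i=0
    while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
        i=$((i + 1))
        [ "$i" -gt 15 ] && return 1    # assumed upper bound
        sleep 1
    done
    # final confirmation with the flat listing, as in the trace
    lsblk -l -o NAME,SERIAL | grep -q -w "$serial" && return 1
    return 0
}

Called as 'wait_serial_gone SPDKISFASTANDAWESOME' after the disconnect, it returns 0 once the block device for that serial is gone.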
09:27:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:40.571 09:27:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:40.571 rmmod nvme_tcp 00:18:40.571 rmmod nvme_fabrics 00:18:40.571 rmmod nvme_keyring 00:18:40.571 09:27:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:40.571 09:27:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:40.571 09:27:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:40.571 09:27:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 681603 ']' 00:18:40.571 09:27:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 681603 00:18:40.571 09:27:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 681603 ']' 00:18:40.571 09:27:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 681603 00:18:40.571 09:27:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:18:40.571 09:27:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:40.571 09:27:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 681603 00:18:40.571 09:27:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:40.571 09:27:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:40.571 09:27:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 681603' 00:18:40.571 killing process with pid 681603 00:18:40.571 09:27:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 681603 00:18:40.571 09:27:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 681603 00:18:40.832 09:27:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:40.832 09:27:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:40.832 09:27:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:40.832 09:27:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:40.832 09:27:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:40.832 09:27:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.832 09:27:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.832 09:27:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.379 09:27:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:43.379 00:18:43.379 real 0m29.276s 00:18:43.379 user 2m37.987s 00:18:43.379 sys 0m9.398s 00:18:43.379 09:27:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:43.379 09:27:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.379 ************************************ 00:18:43.379 END TEST nvmf_fio_target 00:18:43.379 ************************************ 00:18:43.379 09:27:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:43.379 09:27:30 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:43.379 09:27:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:43.379 09:27:30 nvmf_tcp -- 
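The module teardown in nvmfcleanup above (sync, set +e, then up to 20 attempts to unload nvme-tcp and nvme-fabrics) follows the shape sketched here; the success test and the sleep between attempts are assumptions, not the exact helper body.

# Sketch of the unload-retry pattern visible in the nvmf/common.sh cleanup trace.
nvmf_unload_sketch() {
    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1    # assumed back-off between attempts
    done
    set -e
}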
common/autotest_common.sh@1105 -- # xtrace_disable 00:18:43.379 09:27:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:43.379 ************************************ 00:18:43.379 START TEST nvmf_bdevio 00:18:43.379 ************************************ 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:43.379 * Looking for test storage... 00:18:43.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.379 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:43.380 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:43.380 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:43.380 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:43.380 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.380 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.380 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:43.380 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:43.380 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:43.380 09:27:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:43.380 09:27:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:43.380 09:27:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:43.380 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:43.380 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:43.380 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:43.380 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:43.380 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:43.380 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.380 09:27:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:18:43.380 09:27:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.380 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:43.380 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:43.380 09:27:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:18:43.380 09:27:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:51.513 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:51.513 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:51.513 Found net devices under 0000:31:00.0: cvl_0_0 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:51.513 09:27:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.513 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:51.513 
Found net devices under 0000:31:00.1: cvl_0_1 00:18:51.513 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.513 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:51.513 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:18:51.513 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:51.513 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:51.513 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:51.513 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:51.513 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:51.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:51.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:18:51.514 00:18:51.514 --- 10.0.0.2 ping statistics --- 00:18:51.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.514 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:51.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:51.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:18:51.514 00:18:51.514 --- 10.0.0.1 ping statistics --- 00:18:51.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.514 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=691397 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 691397 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 691397 ']' 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:51.514 09:27:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:51.514 [2024-07-15 09:27:38.385551] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:18:51.514 [2024-07-15 09:27:38.385601] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.514 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.514 [2024-07-15 09:27:38.475048] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:51.514 [2024-07-15 09:27:38.552193] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.514 [2024-07-15 09:27:38.552241] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:51.514 [2024-07-15 09:27:38.552249] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.514 [2024-07-15 09:27:38.552255] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:51.514 [2024-07-15 09:27:38.552261] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:51.514 [2024-07-15 09:27:38.552453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:51.514 [2024-07-15 09:27:38.552609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:51.514 [2024-07-15 09:27:38.552818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:51.514 [2024-07-15 09:27:38.552842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:52.085 09:27:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:52.085 09:27:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:18:52.085 09:27:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:52.085 09:27:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:52.085 09:27:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:52.085 09:27:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.085 09:27:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:52.085 09:27:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.085 09:27:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:52.085 [2024-07-15 09:27:39.229030] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.085 09:27:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.085 09:27:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:52.085 09:27:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.085 09:27:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:52.085 Malloc0 00:18:52.085 09:27:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.085 09:27:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:52.085 09:27:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.085 09:27:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:52.085 09:27:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.085 09:27:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:52.085 09:27:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.085 09:27:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:52.346 09:27:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.346 09:27:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:52.346 09:27:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.346 09:27:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:18:52.346 [2024-07-15 09:27:39.294199] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:52.346 09:27:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.346 09:27:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:52.346 09:27:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:52.346 09:27:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:52.346 09:27:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:52.346 09:27:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:52.346 09:27:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:52.346 { 00:18:52.346 "params": { 00:18:52.346 "name": "Nvme$subsystem", 00:18:52.346 "trtype": "$TEST_TRANSPORT", 00:18:52.346 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:52.346 "adrfam": "ipv4", 00:18:52.346 "trsvcid": "$NVMF_PORT", 00:18:52.346 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:52.346 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:52.346 "hdgst": ${hdgst:-false}, 00:18:52.346 "ddgst": ${ddgst:-false} 00:18:52.346 }, 00:18:52.346 "method": "bdev_nvme_attach_controller" 00:18:52.346 } 00:18:52.346 EOF 00:18:52.346 )") 00:18:52.346 09:27:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:52.346 09:27:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:52.346 09:27:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:52.346 09:27:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:52.346 "params": { 00:18:52.346 "name": "Nvme1", 00:18:52.346 "trtype": "tcp", 00:18:52.346 "traddr": "10.0.0.2", 00:18:52.346 "adrfam": "ipv4", 00:18:52.346 "trsvcid": "4420", 00:18:52.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:52.346 "hdgst": false, 00:18:52.346 "ddgst": false 00:18:52.346 }, 00:18:52.346 "method": "bdev_nvme_attach_controller" 00:18:52.346 }' 00:18:52.346 [2024-07-15 09:27:39.357414] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
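gen_nvmf_target_json above emits a bdev_nvme_attach_controller entry that bdevio reads through --json /dev/fd/62. Written to a file instead, the configuration would look roughly like this; the params block is copied from the printf output in the trace, while the outer subsystems/bdev wrapper and the file name are assumptions.

# Hypothetical standalone config file for the same bdevio invocation.
cat <<'EOF' > /tmp/bdevio_nvme.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# then: test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json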
00:18:52.346 [2024-07-15 09:27:39.357502] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid691446 ] 00:18:52.346 EAL: No free 2048 kB hugepages reported on node 1 00:18:52.346 [2024-07-15 09:27:39.430198] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:52.346 [2024-07-15 09:27:39.506617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.346 [2024-07-15 09:27:39.506737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:52.346 [2024-07-15 09:27:39.506741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.915 I/O targets: 00:18:52.915 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:52.915 00:18:52.915 00:18:52.915 CUnit - A unit testing framework for C - Version 2.1-3 00:18:52.915 http://cunit.sourceforge.net/ 00:18:52.915 00:18:52.915 00:18:52.915 Suite: bdevio tests on: Nvme1n1 00:18:52.915 Test: blockdev write read block ...passed 00:18:52.915 Test: blockdev write zeroes read block ...passed 00:18:52.915 Test: blockdev write zeroes read no split ...passed 00:18:52.915 Test: blockdev write zeroes read split ...passed 00:18:52.915 Test: blockdev write zeroes read split partial ...passed 00:18:52.915 Test: blockdev reset ...[2024-07-15 09:27:39.977404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:52.915 [2024-07-15 09:27:39.977473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca7370 (9): Bad file descriptor 00:18:52.915 [2024-07-15 09:27:40.074790] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:52.915 passed 00:18:53.175 Test: blockdev write read 8 blocks ...passed 00:18:53.175 Test: blockdev write read size > 128k ...passed 00:18:53.175 Test: blockdev write read invalid size ...passed 00:18:53.175 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:53.175 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:53.175 Test: blockdev write read max offset ...passed 00:18:53.175 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:53.175 Test: blockdev writev readv 8 blocks ...passed 00:18:53.175 Test: blockdev writev readv 30 x 1block ...passed 00:18:53.175 Test: blockdev writev readv block ...passed 00:18:53.175 Test: blockdev writev readv size > 128k ...passed 00:18:53.175 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:53.175 Test: blockdev comparev and writev ...[2024-07-15 09:27:40.339606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.175 [2024-07-15 09:27:40.339632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:53.175 [2024-07-15 09:27:40.339643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.175 [2024-07-15 09:27:40.339649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:53.175 [2024-07-15 09:27:40.340124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.175 [2024-07-15 09:27:40.340133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:53.175 [2024-07-15 09:27:40.340143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.175 [2024-07-15 09:27:40.340148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:53.175 [2024-07-15 09:27:40.340631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.175 [2024-07-15 09:27:40.340639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:53.175 [2024-07-15 09:27:40.340653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.175 [2024-07-15 09:27:40.340658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:53.175 [2024-07-15 09:27:40.341134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.175 [2024-07-15 09:27:40.341142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:53.175 [2024-07-15 09:27:40.341151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.175 [2024-07-15 09:27:40.341156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:53.435 passed 00:18:53.435 Test: blockdev nvme passthru rw ...passed 00:18:53.435 Test: blockdev nvme passthru vendor specific ...[2024-07-15 09:27:40.426696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:53.435 [2024-07-15 09:27:40.426708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:53.435 [2024-07-15 09:27:40.427077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:53.435 [2024-07-15 09:27:40.427086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:53.435 [2024-07-15 09:27:40.427476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:53.435 [2024-07-15 09:27:40.427485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:53.435 [2024-07-15 09:27:40.427870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:53.435 [2024-07-15 09:27:40.427877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:53.435 passed 00:18:53.435 Test: blockdev nvme admin passthru ...passed 00:18:53.435 Test: blockdev copy ...passed 00:18:53.435 00:18:53.435 Run Summary: Type Total Ran Passed Failed Inactive 00:18:53.435 suites 1 1 n/a 0 0 00:18:53.435 tests 23 23 23 0 0 00:18:53.435 asserts 152 152 152 0 n/a 00:18:53.435 00:18:53.435 Elapsed time = 1.342 seconds 00:18:53.435 09:27:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:53.435 09:27:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.435 09:27:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:53.435 09:27:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.435 09:27:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:53.435 09:27:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:18:53.435 09:27:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:53.435 09:27:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:18:53.435 09:27:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:53.435 09:27:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:18:53.435 09:27:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:53.435 09:27:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:53.435 rmmod nvme_tcp 00:18:53.695 rmmod nvme_fabrics 00:18:53.695 rmmod nvme_keyring 00:18:53.695 09:27:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:53.695 09:27:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:18:53.695 09:27:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:18:53.695 09:27:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 691397 ']' 00:18:53.695 09:27:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 691397 00:18:53.695 09:27:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
691397 ']' 00:18:53.695 09:27:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 691397 00:18:53.695 09:27:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:18:53.695 09:27:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:53.695 09:27:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 691397 00:18:53.695 09:27:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:53.695 09:27:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:53.695 09:27:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 691397' 00:18:53.695 killing process with pid 691397 00:18:53.695 09:27:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 691397 00:18:53.695 09:27:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 691397 00:18:53.695 09:27:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:53.695 09:27:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:53.695 09:27:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:53.695 09:27:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:53.695 09:27:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:53.695 09:27:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.695 09:27:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:53.695 09:27:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.237 09:27:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:56.237 00:18:56.237 real 0m12.900s 00:18:56.237 user 0m14.097s 00:18:56.237 sys 0m6.681s 00:18:56.237 09:27:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:56.237 09:27:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:56.237 ************************************ 00:18:56.237 END TEST nvmf_bdevio 00:18:56.237 ************************************ 00:18:56.237 09:27:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:56.237 09:27:42 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:56.237 09:27:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:56.237 09:27:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:56.237 09:27:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:56.237 ************************************ 00:18:56.237 START TEST nvmf_auth_target 00:18:56.237 ************************************ 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:56.237 * Looking for test storage... 
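Both nvmf_fio_target and nvmf_bdevio end, as traced just above, by shutting their target down through killprocess (a kill -0 probe, ps to report the reactor name, kill, then wait). A simplified sketch, with the sudo special-casing and error handling of the real helper left out:

# Reduced form of the killprocess pattern seen in the trace; error paths
# and the reactor-vs-sudo check are simplified assumptions.
killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0      # nothing left to kill
    ps --no-headers -o comm= "$pid"             # e.g. reactor_0 for nvmf_tgt
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}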
00:18:56.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:56.237 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:56.238 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.238 09:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:56.238 09:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.238 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:56.238 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:56.238 09:27:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:56.238 09:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:04.378 09:27:50 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:04.378 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:04.378 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: 
cvl_0_0' 00:19:04.378 Found net devices under 0000:31:00.0: cvl_0_0 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:04.378 Found net devices under 0000:31:00.1: cvl_0_1 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:04.378 09:27:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:04.378 09:27:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:04.378 09:27:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:04.378 09:27:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:04.378 09:27:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:04.378 09:27:51 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:04.378 09:27:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:04.378 09:27:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:04.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:04.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:19:04.378 00:19:04.378 --- 10.0.0.2 ping statistics --- 00:19:04.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.378 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:19:04.378 09:27:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:04.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:04.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:19:04.379 00:19:04.379 --- 10.0.0.1 ping statistics --- 00:19:04.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.379 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:19:04.379 09:27:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:04.379 09:27:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:04.379 09:27:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:04.379 09:27:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:04.379 09:27:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:04.379 09:27:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:04.379 09:27:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:04.379 09:27:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:04.379 09:27:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:04.379 09:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:04.379 09:27:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:04.379 09:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:04.379 09:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.379 09:27:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=696444 00:19:04.379 09:27:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 696444 00:19:04.379 09:27:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:04.379 09:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 696444 ']' 00:19:04.379 09:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.379 09:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:04.379 09:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
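Condensed, the nvmf_tcp_init sequence traced above moves one e810 port (cvl_0_0) into a private network namespace for the target and leaves its peer (cvl_0_1) on the host for the initiator, then checks reachability in both directions before loading the initiator driver. A hand-run equivalent using the same names and addresses the trace reports would be roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp

nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth), which is why its 10.0.0.2:4420 listener is only reachable from the initiator-side port cvl_0_1.
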
00:19:04.379 09:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:04.379 09:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=696647 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=17e8dc151ed798624ce40689fffdf5d2bb5e0ea7c9cbc736 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.GSU 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 17e8dc151ed798624ce40689fffdf5d2bb5e0ea7c9cbc736 0 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 17e8dc151ed798624ce40689fffdf5d2bb5e0ea7c9cbc736 0 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=17e8dc151ed798624ce40689fffdf5d2bb5e0ea7c9cbc736 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.GSU 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.GSU 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.GSU 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a1fed1954310d648a5e438d43d66f42baeff45640930d803e839fc235d275b8f 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.kNg 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a1fed1954310d648a5e438d43d66f42baeff45640930d803e839fc235d275b8f 3 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a1fed1954310d648a5e438d43d66f42baeff45640930d803e839fc235d275b8f 3 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a1fed1954310d648a5e438d43d66f42baeff45640930d803e839fc235d275b8f 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:04.949 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.kNg 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.kNg 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.kNg 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b628fb502e0ad613108b8c89fcd16307 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.cbw 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b628fb502e0ad613108b8c89fcd16307 1 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b628fb502e0ad613108b8c89fcd16307 1 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=b628fb502e0ad613108b8c89fcd16307 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.cbw 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.cbw 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.cbw 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=40351b9e28aa68e4b9631dacf779732027d4b7f1f21db944 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Vty 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 40351b9e28aa68e4b9631dacf779732027d4b7f1f21db944 2 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 40351b9e28aa68e4b9631dacf779732027d4b7f1f21db944 2 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=40351b9e28aa68e4b9631dacf779732027d4b7f1f21db944 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Vty 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Vty 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Vty 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d2eb81de32de61b3c21ff79dab350fae76cd6a02590fc902 00:19:05.210 
09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.cFh 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d2eb81de32de61b3c21ff79dab350fae76cd6a02590fc902 2 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d2eb81de32de61b3c21ff79dab350fae76cd6a02590fc902 2 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d2eb81de32de61b3c21ff79dab350fae76cd6a02590fc902 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.cFh 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.cFh 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.cFh 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d494e78b2c0a1f3caa1870eafae68c0b 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.4jS 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d494e78b2c0a1f3caa1870eafae68c0b 1 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d494e78b2c0a1f3caa1870eafae68c0b 1 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d494e78b2c0a1f3caa1870eafae68c0b 00:19:05.210 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:05.211 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.4jS 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.4jS 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.4jS 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=075a1dfe321388161e4898cd8bbc49e5294757437b23c08338ca1d24f70accab 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Nfp 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 075a1dfe321388161e4898cd8bbc49e5294757437b23c08338ca1d24f70accab 3 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 075a1dfe321388161e4898cd8bbc49e5294757437b23c08338ca1d24f70accab 3 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=075a1dfe321388161e4898cd8bbc49e5294757437b23c08338ca1d24f70accab 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Nfp 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Nfp 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Nfp 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 696444 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 696444 ']' 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
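Each gen_dhchap_key call in the trace above draws the requested number of hex characters from /dev/urandom (xxd -p -c0 -l <bytes>) and wraps them into a DHHC-1 secret whose second field selects the hash (00=null, 01=sha256, 02=sha384, 03=sha512). The python one-liner that performs the wrapping is not echoed in the trace; assuming the conventional DH-HMAC-CHAP secret representation (base64 over the ASCII key followed by its CRC-32, which matches the secrets printed later in this log but is an assumption here), a standalone sketch of the null/48 case is:

    key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars, as in gen_dhchap_key null 48
    # assumed encoding: base64(key bytes || CRC-32 of key, little-endian); digest index 00 = null
    secret=$(python3 -c 'import base64,binascii,struct,sys; k=sys.argv[1].encode(); print("DHHC-1:00:"+base64.b64encode(k+struct.pack("<I",binascii.crc32(k))).decode()+":")' "$key")
    keyfile=$(mktemp -t spdk.key-null.XXX)  # e.g. /tmp/spdk.key-null.GSU above
    chmod 0600 "$keyfile" && printf '%s\n' "$secret" > "$keyfile"

The resulting strings are what keyring_file_add_key registers on the host side and what nvme connect later passes as --dhchap-secret / --dhchap-ctrl-secret.
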
00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 696647 /var/tmp/host.sock 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 696647 ']' 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:05.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:05.471 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.731 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:05.731 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:05.731 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:05.731 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.731 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.731 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.731 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:05.731 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GSU 00:19:05.731 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.731 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.731 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.731 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.GSU 00:19:05.731 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.GSU 00:19:05.992 09:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.kNg ]] 00:19:05.992 09:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kNg 00:19:05.992 09:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.992 09:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.992 09:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.992 09:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kNg 00:19:05.992 09:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kNg 00:19:05.992 09:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:05.992 09:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.cbw 00:19:05.992 09:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.992 09:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.252 09:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.252 09:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.cbw 00:19:06.252 09:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.cbw 00:19:06.252 09:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Vty ]] 00:19:06.252 09:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Vty 00:19:06.252 09:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.252 09:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.252 09:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.252 09:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Vty 00:19:06.252 09:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Vty 00:19:06.513 09:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:06.513 09:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.cFh 00:19:06.513 09:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.513 09:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.513 09:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.513 09:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.cFh 00:19:06.513 09:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.cFh 00:19:06.513 09:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.4jS ]] 00:19:06.513 09:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4jS 00:19:06.513 09:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.513 09:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.773 09:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.773 09:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4jS 00:19:06.773 09:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.4jS 00:19:06.773 09:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:06.773 09:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Nfp 00:19:06.773 09:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.773 09:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.773 09:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.773 09:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Nfp 00:19:06.773 09:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Nfp 00:19:07.034 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:07.034 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:07.034 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.034 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.034 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:07.034 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:07.034 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:07.034 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.034 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:07.034 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:07.034 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:07.034 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.034 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.034 09:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.034 09:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.034 09:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.034 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.035 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.295 00:19:07.295 09:27:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.295 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.295 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.556 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.556 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.556 09:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.556 09:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.556 09:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.556 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.556 { 00:19:07.556 "cntlid": 1, 00:19:07.556 "qid": 0, 00:19:07.556 "state": "enabled", 00:19:07.556 "thread": "nvmf_tgt_poll_group_000", 00:19:07.556 "listen_address": { 00:19:07.556 "trtype": "TCP", 00:19:07.556 "adrfam": "IPv4", 00:19:07.556 "traddr": "10.0.0.2", 00:19:07.556 "trsvcid": "4420" 00:19:07.556 }, 00:19:07.556 "peer_address": { 00:19:07.556 "trtype": "TCP", 00:19:07.556 "adrfam": "IPv4", 00:19:07.556 "traddr": "10.0.0.1", 00:19:07.556 "trsvcid": "39170" 00:19:07.556 }, 00:19:07.556 "auth": { 00:19:07.556 "state": "completed", 00:19:07.556 "digest": "sha256", 00:19:07.556 "dhgroup": "null" 00:19:07.556 } 00:19:07.556 } 00:19:07.556 ]' 00:19:07.556 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.556 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.556 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.556 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:07.556 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.556 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.556 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.556 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.817 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTdlOGRjMTUxZWQ3OTg2MjRjZTQwNjg5ZmZmZGY1ZDJiYjVlMGVhN2M5Y2JjNzM2wwp8yw==: --dhchap-ctrl-secret DHHC-1:03:YTFmZWQxOTU0MzEwZDY0OGE1ZTQzOGQ0M2Q2NmY0MmJhZWZmNDU2NDA5MzBkODAzZTgzOWZjMjM1ZDI3NWI4ZksrVuA=: 00:19:08.394 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.394 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:08.394 09:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.394 09:27:55 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.394 09:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.394 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.394 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:08.394 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:08.702 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:08.702 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.702 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:08.702 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:08.702 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:08.703 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.703 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.703 09:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.703 09:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.703 09:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.703 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.703 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.971 00:19:08.971 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.971 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.971 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.971 09:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.972 09:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.972 09:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.972 09:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.972 09:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.972 09:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.972 { 00:19:08.972 "cntlid": 3, 00:19:08.972 "qid": 0, 00:19:08.972 
"state": "enabled", 00:19:08.972 "thread": "nvmf_tgt_poll_group_000", 00:19:08.972 "listen_address": { 00:19:08.972 "trtype": "TCP", 00:19:08.972 "adrfam": "IPv4", 00:19:08.972 "traddr": "10.0.0.2", 00:19:08.972 "trsvcid": "4420" 00:19:08.972 }, 00:19:08.972 "peer_address": { 00:19:08.972 "trtype": "TCP", 00:19:08.972 "adrfam": "IPv4", 00:19:08.972 "traddr": "10.0.0.1", 00:19:08.972 "trsvcid": "39198" 00:19:08.972 }, 00:19:08.972 "auth": { 00:19:08.972 "state": "completed", 00:19:08.972 "digest": "sha256", 00:19:08.972 "dhgroup": "null" 00:19:08.972 } 00:19:08.972 } 00:19:08.972 ]' 00:19:08.972 09:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.972 09:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.972 09:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.231 09:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:09.231 09:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.231 09:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.231 09:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.231 09:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.231 09:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjYyOGZiNTAyZTBhZDYxMzEwOGI4Yzg5ZmNkMTYzMDfXT6l9: --dhchap-ctrl-secret DHHC-1:02:NDAzNTFiOWUyOGFhNjhlNGI5NjMxZGFjZjc3OTczMjAyN2Q0YjdmMWYyMWRiOTQ0QIEdSw==: 00:19:10.174 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.174 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:10.174 09:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.174 09:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.174 09:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.174 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.174 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:10.174 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:10.174 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:10.174 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.174 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:10.174 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:10.174 09:27:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:10.174 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.174 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.174 09:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.174 09:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.174 09:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.174 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.174 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.436 00:19:10.436 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.436 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.436 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.698 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.698 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.698 09:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.698 09:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.698 09:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.698 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.698 { 00:19:10.698 "cntlid": 5, 00:19:10.698 "qid": 0, 00:19:10.698 "state": "enabled", 00:19:10.698 "thread": "nvmf_tgt_poll_group_000", 00:19:10.698 "listen_address": { 00:19:10.698 "trtype": "TCP", 00:19:10.698 "adrfam": "IPv4", 00:19:10.698 "traddr": "10.0.0.2", 00:19:10.698 "trsvcid": "4420" 00:19:10.698 }, 00:19:10.698 "peer_address": { 00:19:10.698 "trtype": "TCP", 00:19:10.698 "adrfam": "IPv4", 00:19:10.698 "traddr": "10.0.0.1", 00:19:10.698 "trsvcid": "39218" 00:19:10.698 }, 00:19:10.698 "auth": { 00:19:10.698 "state": "completed", 00:19:10.698 "digest": "sha256", 00:19:10.698 "dhgroup": "null" 00:19:10.698 } 00:19:10.698 } 00:19:10.698 ]' 00:19:10.698 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.698 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.698 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.698 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:10.698 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:19:10.698 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.698 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.698 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.959 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZDJlYjgxZGUzMmRlNjFiM2MyMWZmNzlkYWIzNTBmYWU3NmNkNmEwMjU5MGZjOTAyb6nGcQ==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5NGU3OGIyYzBhMWYzY2FhMTg3MGVhZmFlNjhjMGINfRK5: 00:19:11.530 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.530 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:11.530 09:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.530 09:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.530 09:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.530 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.530 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:11.530 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:11.790 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:11.790 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.790 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:11.790 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:11.790 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:11.790 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.790 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:11.790 09:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.790 09:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.790 09:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.790 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.790 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.050 00:19:12.050 09:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.050 09:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.050 09:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.050 09:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.050 09:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.050 09:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.050 09:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.050 09:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.050 09:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.050 { 00:19:12.050 "cntlid": 7, 00:19:12.050 "qid": 0, 00:19:12.050 "state": "enabled", 00:19:12.050 "thread": "nvmf_tgt_poll_group_000", 00:19:12.050 "listen_address": { 00:19:12.050 "trtype": "TCP", 00:19:12.050 "adrfam": "IPv4", 00:19:12.050 "traddr": "10.0.0.2", 00:19:12.050 "trsvcid": "4420" 00:19:12.050 }, 00:19:12.050 "peer_address": { 00:19:12.050 "trtype": "TCP", 00:19:12.050 "adrfam": "IPv4", 00:19:12.050 "traddr": "10.0.0.1", 00:19:12.050 "trsvcid": "39250" 00:19:12.050 }, 00:19:12.050 "auth": { 00:19:12.050 "state": "completed", 00:19:12.050 "digest": "sha256", 00:19:12.050 "dhgroup": "null" 00:19:12.050 } 00:19:12.050 } 00:19:12.050 ]' 00:19:12.051 09:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.311 09:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:12.311 09:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.311 09:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:12.311 09:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.311 09:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.311 09:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.311 09:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.571 09:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:MDc1YTFkZmUzMjEzODgxNjFlNDg5OGNkOGJiYzQ5ZTUyOTQ3NTc0MzdiMjNjMDgzMzhjYTFkMjRmNzBhY2NhYq3ofEE=: 00:19:13.141 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.141 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:13.141 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.141 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.141 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.141 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.141 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.141 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:13.141 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:13.401 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:13.401 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.401 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:13.401 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:13.401 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:13.401 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.401 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.401 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.401 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.401 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.401 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.401 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.401 00:19:13.661 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.661 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.661 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.661 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.661 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.661 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:19:13.661 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.661 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.661 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.661 { 00:19:13.661 "cntlid": 9, 00:19:13.661 "qid": 0, 00:19:13.661 "state": "enabled", 00:19:13.661 "thread": "nvmf_tgt_poll_group_000", 00:19:13.661 "listen_address": { 00:19:13.662 "trtype": "TCP", 00:19:13.662 "adrfam": "IPv4", 00:19:13.662 "traddr": "10.0.0.2", 00:19:13.662 "trsvcid": "4420" 00:19:13.662 }, 00:19:13.662 "peer_address": { 00:19:13.662 "trtype": "TCP", 00:19:13.662 "adrfam": "IPv4", 00:19:13.662 "traddr": "10.0.0.1", 00:19:13.662 "trsvcid": "39274" 00:19:13.662 }, 00:19:13.662 "auth": { 00:19:13.662 "state": "completed", 00:19:13.662 "digest": "sha256", 00:19:13.662 "dhgroup": "ffdhe2048" 00:19:13.662 } 00:19:13.662 } 00:19:13.662 ]' 00:19:13.662 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.662 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.662 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.922 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:13.922 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.922 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.922 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.922 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.922 09:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTdlOGRjMTUxZWQ3OTg2MjRjZTQwNjg5ZmZmZGY1ZDJiYjVlMGVhN2M5Y2JjNzM2wwp8yw==: --dhchap-ctrl-secret DHHC-1:03:YTFmZWQxOTU0MzEwZDY0OGE1ZTQzOGQ0M2Q2NmY0MmJhZWZmNDU2NDA5MzBkODAzZTgzOWZjMjM1ZDI3NWI4ZksrVuA=: 00:19:14.492 09:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.492 09:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:14.492 09:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.492 09:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.752 09:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.752 09:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.752 09:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:14.752 09:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:14.752 09:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:14.752 09:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.752 09:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:14.752 09:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:14.752 09:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:14.752 09:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.752 09:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.752 09:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.752 09:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.752 09:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.752 09:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.752 09:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.013 00:19:15.013 09:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.013 09:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.013 09:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.274 09:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.274 09:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.274 09:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.274 09:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.274 09:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.274 09:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.274 { 00:19:15.274 "cntlid": 11, 00:19:15.274 "qid": 0, 00:19:15.274 "state": "enabled", 00:19:15.274 "thread": "nvmf_tgt_poll_group_000", 00:19:15.274 "listen_address": { 00:19:15.274 "trtype": "TCP", 00:19:15.274 "adrfam": "IPv4", 00:19:15.274 "traddr": "10.0.0.2", 00:19:15.274 "trsvcid": "4420" 00:19:15.274 }, 00:19:15.274 "peer_address": { 00:19:15.274 "trtype": "TCP", 00:19:15.274 "adrfam": "IPv4", 00:19:15.274 "traddr": "10.0.0.1", 00:19:15.274 "trsvcid": "39304" 00:19:15.274 }, 00:19:15.274 "auth": { 00:19:15.274 "state": "completed", 00:19:15.274 "digest": "sha256", 00:19:15.274 "dhgroup": "ffdhe2048" 00:19:15.274 } 00:19:15.274 } 00:19:15.274 ]' 00:19:15.274 
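[Editor's sketch] The trace above is one pass of the test's connect/verify loop (sha256 digest, ffdhe2048 DH group, key1). Stripped of the xtrace prefixes, the host- and target-side RPC sequence it exercises looks roughly like the snippet below. This is a condensed sketch, not the script itself: rpc.py is shown with an abbreviated path and against its default target socket where the trace uses the full workspace path or the rpc_cmd wrapper, and key1/ckey1 are key names registered earlier in the script (not shown in this excerpt).

  # Allow sha256 + ffdhe2048 on the host-side bdev layer (these cycle per iteration)
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # Register the host on the target subsystem with its DH-HMAC-CHAP key pair
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Attach from the SPDK host application, authenticating with the same keys
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Confirm on the target that the new queue pair completed auth with the expected parameters
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

  # Tear the controller down before the next digest/dhgroup/key combination
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The jq checks in the trace correspond to the last step: the test expects the reported digest, dhgroup and state ("completed") to match the combination under test.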
09:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.274 09:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.274 09:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.274 09:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:15.274 09:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.274 09:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.274 09:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.274 09:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.535 09:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjYyOGZiNTAyZTBhZDYxMzEwOGI4Yzg5ZmNkMTYzMDfXT6l9: --dhchap-ctrl-secret DHHC-1:02:NDAzNTFiOWUyOGFhNjhlNGI5NjMxZGFjZjc3OTczMjAyN2Q0YjdmMWYyMWRiOTQ0QIEdSw==: 00:19:16.106 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.106 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:16.106 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.106 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.106 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.106 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.106 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:16.106 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:16.367 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:16.367 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.367 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:16.367 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:16.367 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:16.367 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.367 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.367 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.367 09:28:03 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:16.367 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.367 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.367 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.367 00:19:16.629 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.629 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.629 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.629 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.629 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.629 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.629 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.629 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.629 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.629 { 00:19:16.629 "cntlid": 13, 00:19:16.629 "qid": 0, 00:19:16.629 "state": "enabled", 00:19:16.629 "thread": "nvmf_tgt_poll_group_000", 00:19:16.629 "listen_address": { 00:19:16.629 "trtype": "TCP", 00:19:16.629 "adrfam": "IPv4", 00:19:16.629 "traddr": "10.0.0.2", 00:19:16.629 "trsvcid": "4420" 00:19:16.629 }, 00:19:16.629 "peer_address": { 00:19:16.629 "trtype": "TCP", 00:19:16.629 "adrfam": "IPv4", 00:19:16.629 "traddr": "10.0.0.1", 00:19:16.629 "trsvcid": "46772" 00:19:16.629 }, 00:19:16.629 "auth": { 00:19:16.629 "state": "completed", 00:19:16.629 "digest": "sha256", 00:19:16.629 "dhgroup": "ffdhe2048" 00:19:16.629 } 00:19:16.629 } 00:19:16.629 ]' 00:19:16.629 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.629 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.629 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.890 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:16.890 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.890 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.890 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.890 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.890 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZDJlYjgxZGUzMmRlNjFiM2MyMWZmNzlkYWIzNTBmYWU3NmNkNmEwMjU5MGZjOTAyb6nGcQ==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5NGU3OGIyYzBhMWYzY2FhMTg3MGVhZmFlNjhjMGINfRK5: 00:19:17.830 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.830 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:17.830 09:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.830 09:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.830 09:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.830 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.830 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:17.830 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:17.830 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:17.830 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.830 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:17.830 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:17.830 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:17.830 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.830 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:17.830 09:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.830 09:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.830 09:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.830 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.830 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.091 00:19:18.091 09:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.091 09:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:18.091 09:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.353 09:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.353 09:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.353 09:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.353 09:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.353 09:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.353 09:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.353 { 00:19:18.353 "cntlid": 15, 00:19:18.353 "qid": 0, 00:19:18.353 "state": "enabled", 00:19:18.353 "thread": "nvmf_tgt_poll_group_000", 00:19:18.353 "listen_address": { 00:19:18.353 "trtype": "TCP", 00:19:18.353 "adrfam": "IPv4", 00:19:18.353 "traddr": "10.0.0.2", 00:19:18.353 "trsvcid": "4420" 00:19:18.353 }, 00:19:18.353 "peer_address": { 00:19:18.353 "trtype": "TCP", 00:19:18.353 "adrfam": "IPv4", 00:19:18.353 "traddr": "10.0.0.1", 00:19:18.353 "trsvcid": "46780" 00:19:18.353 }, 00:19:18.353 "auth": { 00:19:18.353 "state": "completed", 00:19:18.353 "digest": "sha256", 00:19:18.353 "dhgroup": "ffdhe2048" 00:19:18.353 } 00:19:18.353 } 00:19:18.353 ]' 00:19:18.353 09:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.353 09:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.353 09:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.353 09:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:18.353 09:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.353 09:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.353 09:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.353 09:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.614 09:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:MDc1YTFkZmUzMjEzODgxNjFlNDg5OGNkOGJiYzQ5ZTUyOTQ3NTc0MzdiMjNjMDgzMzhjYTFkMjRmNzBhY2NhYq3ofEE=: 00:19:19.186 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.186 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:19.186 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.186 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.186 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.186 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.186 09:28:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.186 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:19.186 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:19.448 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:19.448 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.448 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:19.448 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:19.448 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:19.448 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.448 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.448 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.448 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.448 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.448 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.448 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.709 00:19:19.709 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.709 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.709 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.971 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.971 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.971 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.971 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.971 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.971 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.971 { 00:19:19.971 "cntlid": 17, 00:19:19.971 "qid": 0, 00:19:19.971 "state": "enabled", 00:19:19.971 "thread": "nvmf_tgt_poll_group_000", 00:19:19.971 "listen_address": { 00:19:19.971 "trtype": "TCP", 00:19:19.971 "adrfam": "IPv4", 
00:19:19.971 "traddr": "10.0.0.2", 00:19:19.971 "trsvcid": "4420" 00:19:19.971 }, 00:19:19.971 "peer_address": { 00:19:19.971 "trtype": "TCP", 00:19:19.971 "adrfam": "IPv4", 00:19:19.971 "traddr": "10.0.0.1", 00:19:19.971 "trsvcid": "46804" 00:19:19.971 }, 00:19:19.971 "auth": { 00:19:19.971 "state": "completed", 00:19:19.971 "digest": "sha256", 00:19:19.971 "dhgroup": "ffdhe3072" 00:19:19.971 } 00:19:19.971 } 00:19:19.971 ]' 00:19:19.971 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.971 09:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.971 09:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.971 09:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:19.971 09:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.971 09:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.971 09:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.971 09:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.231 09:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTdlOGRjMTUxZWQ3OTg2MjRjZTQwNjg5ZmZmZGY1ZDJiYjVlMGVhN2M5Y2JjNzM2wwp8yw==: --dhchap-ctrl-secret DHHC-1:03:YTFmZWQxOTU0MzEwZDY0OGE1ZTQzOGQ0M2Q2NmY0MmJhZWZmNDU2NDA5MzBkODAzZTgzOWZjMjM1ZDI3NWI4ZksrVuA=: 00:19:20.801 09:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.801 09:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:20.802 09:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.802 09:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.802 09:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.802 09:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.802 09:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:20.802 09:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:21.085 09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:21.085 09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.085 09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:21.085 09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:21.085 09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:21.085 09:28:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.085 09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.085 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.085 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.085 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.085 09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.085 09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.347 00:19:21.347 09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.347 09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.347 09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.607 09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.607 09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.607 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.607 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.607 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.607 09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.607 { 00:19:21.607 "cntlid": 19, 00:19:21.607 "qid": 0, 00:19:21.607 "state": "enabled", 00:19:21.607 "thread": "nvmf_tgt_poll_group_000", 00:19:21.607 "listen_address": { 00:19:21.607 "trtype": "TCP", 00:19:21.607 "adrfam": "IPv4", 00:19:21.607 "traddr": "10.0.0.2", 00:19:21.607 "trsvcid": "4420" 00:19:21.607 }, 00:19:21.607 "peer_address": { 00:19:21.607 "trtype": "TCP", 00:19:21.607 "adrfam": "IPv4", 00:19:21.607 "traddr": "10.0.0.1", 00:19:21.607 "trsvcid": "46828" 00:19:21.607 }, 00:19:21.607 "auth": { 00:19:21.607 "state": "completed", 00:19:21.607 "digest": "sha256", 00:19:21.607 "dhgroup": "ffdhe3072" 00:19:21.608 } 00:19:21.608 } 00:19:21.608 ]' 00:19:21.608 09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.608 09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.608 09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.608 09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:21.608 09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.608 09:28:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.608 09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.608 09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.868 09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjYyOGZiNTAyZTBhZDYxMzEwOGI4Yzg5ZmNkMTYzMDfXT6l9: --dhchap-ctrl-secret DHHC-1:02:NDAzNTFiOWUyOGFhNjhlNGI5NjMxZGFjZjc3OTczMjAyN2Q0YjdmMWYyMWRiOTQ0QIEdSw==: 00:19:22.440 09:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.440 09:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:22.440 09:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.440 09:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.440 09:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.440 09:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.440 09:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:22.440 09:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:22.702 09:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:22.702 09:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.702 09:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:22.702 09:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:22.702 09:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:22.702 09:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.702 09:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.702 09:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.702 09:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.702 09:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.702 09:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.702 09:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.964 00:19:22.964 09:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.964 09:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.964 09:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.964 09:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.964 09:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.964 09:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.964 09:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.224 09:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.224 09:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.224 { 00:19:23.224 "cntlid": 21, 00:19:23.224 "qid": 0, 00:19:23.224 "state": "enabled", 00:19:23.224 "thread": "nvmf_tgt_poll_group_000", 00:19:23.224 "listen_address": { 00:19:23.224 "trtype": "TCP", 00:19:23.224 "adrfam": "IPv4", 00:19:23.224 "traddr": "10.0.0.2", 00:19:23.224 "trsvcid": "4420" 00:19:23.224 }, 00:19:23.224 "peer_address": { 00:19:23.224 "trtype": "TCP", 00:19:23.224 "adrfam": "IPv4", 00:19:23.224 "traddr": "10.0.0.1", 00:19:23.224 "trsvcid": "46846" 00:19:23.224 }, 00:19:23.224 "auth": { 00:19:23.224 "state": "completed", 00:19:23.224 "digest": "sha256", 00:19:23.224 "dhgroup": "ffdhe3072" 00:19:23.224 } 00:19:23.224 } 00:19:23.224 ]' 00:19:23.224 09:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.224 09:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.224 09:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.224 09:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:23.224 09:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.224 09:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.224 09:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.224 09:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.484 09:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZDJlYjgxZGUzMmRlNjFiM2MyMWZmNzlkYWIzNTBmYWU3NmNkNmEwMjU5MGZjOTAyb6nGcQ==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5NGU3OGIyYzBhMWYzY2FhMTg3MGVhZmFlNjhjMGINfRK5: 00:19:24.057 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
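[Editor's sketch] After the SPDK host-side check, each iteration also validates the kernel-initiator path seen above: nvme-cli connects with the same DH-HMAC-CHAP secrets passed inline, disconnects, and the host entry is removed from the subsystem before the next key is tried. A minimal sketch of that part of the loop follows; the angle-bracketed secrets are placeholders for the DHHC-1 strings used in this run, and rpc.py is again shown with an abbreviated path.

  # Kernel-initiator connect with the secrets passed directly to nvme-cli
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
      --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb \
      --dhchap-secret "DHHC-1:02:<host secret>" \
      --dhchap-ctrl-secret "DHHC-1:01:<controller secret>"

  # Drop the kernel connection and de-register the host before the next iteration
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb

The "disconnected 1 controller(s)" messages in the trace are the expected output of the nvme disconnect step.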
00:19:24.057 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:24.057 09:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.057 09:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.057 09:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.057 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.057 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:24.057 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:24.057 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:24.057 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.057 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:24.057 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:24.057 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:24.057 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.057 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:24.057 09:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.057 09:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.057 09:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.057 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.057 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.319 00:19:24.319 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.319 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.319 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.580 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.580 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.580 09:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.580 09:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:24.580 09:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.580 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.580 { 00:19:24.580 "cntlid": 23, 00:19:24.580 "qid": 0, 00:19:24.580 "state": "enabled", 00:19:24.580 "thread": "nvmf_tgt_poll_group_000", 00:19:24.580 "listen_address": { 00:19:24.580 "trtype": "TCP", 00:19:24.580 "adrfam": "IPv4", 00:19:24.580 "traddr": "10.0.0.2", 00:19:24.580 "trsvcid": "4420" 00:19:24.580 }, 00:19:24.580 "peer_address": { 00:19:24.580 "trtype": "TCP", 00:19:24.580 "adrfam": "IPv4", 00:19:24.580 "traddr": "10.0.0.1", 00:19:24.580 "trsvcid": "46882" 00:19:24.580 }, 00:19:24.580 "auth": { 00:19:24.580 "state": "completed", 00:19:24.580 "digest": "sha256", 00:19:24.580 "dhgroup": "ffdhe3072" 00:19:24.580 } 00:19:24.580 } 00:19:24.580 ]' 00:19:24.580 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.581 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.581 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.581 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:24.581 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.581 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.581 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.581 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.842 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:MDc1YTFkZmUzMjEzODgxNjFlNDg5OGNkOGJiYzQ5ZTUyOTQ3NTc0MzdiMjNjMDgzMzhjYTFkMjRmNzBhY2NhYq3ofEE=: 00:19:25.414 09:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.414 09:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:25.414 09:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.414 09:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.414 09:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.414 09:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.414 09:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.414 09:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:25.414 09:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:25.675 09:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:19:25.675 09:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.675 09:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:25.675 09:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:25.675 09:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:25.675 09:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.675 09:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.675 09:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.675 09:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.675 09:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.675 09:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.675 09:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.937 00:19:25.937 09:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.937 09:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.937 09:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.197 09:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.197 09:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.197 09:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.197 09:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.197 09:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.197 09:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.197 { 00:19:26.197 "cntlid": 25, 00:19:26.197 "qid": 0, 00:19:26.197 "state": "enabled", 00:19:26.197 "thread": "nvmf_tgt_poll_group_000", 00:19:26.197 "listen_address": { 00:19:26.197 "trtype": "TCP", 00:19:26.197 "adrfam": "IPv4", 00:19:26.197 "traddr": "10.0.0.2", 00:19:26.197 "trsvcid": "4420" 00:19:26.197 }, 00:19:26.197 "peer_address": { 00:19:26.197 "trtype": "TCP", 00:19:26.197 "adrfam": "IPv4", 00:19:26.197 "traddr": "10.0.0.1", 00:19:26.197 "trsvcid": "46902" 00:19:26.197 }, 00:19:26.197 "auth": { 00:19:26.197 "state": "completed", 00:19:26.197 "digest": "sha256", 00:19:26.197 "dhgroup": "ffdhe4096" 00:19:26.197 } 00:19:26.197 } 00:19:26.197 ]' 00:19:26.197 09:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.197 09:28:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.197 09:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.197 09:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:26.197 09:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.197 09:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.197 09:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.197 09:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.458 09:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTdlOGRjMTUxZWQ3OTg2MjRjZTQwNjg5ZmZmZGY1ZDJiYjVlMGVhN2M5Y2JjNzM2wwp8yw==: --dhchap-ctrl-secret DHHC-1:03:YTFmZWQxOTU0MzEwZDY0OGE1ZTQzOGQ0M2Q2NmY0MmJhZWZmNDU2NDA5MzBkODAzZTgzOWZjMjM1ZDI3NWI4ZksrVuA=: 00:19:27.060 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.060 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:27.060 09:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.060 09:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.060 09:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.060 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.060 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:27.060 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:27.321 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:27.321 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.321 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:27.321 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:27.321 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:27.321 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.321 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.321 09:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.321 09:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.321 09:28:14 
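The trace above is one pass of connect_authenticate: the target authorizes the host NQN with a DH-HMAC-CHAP key pair, the host attaches a controller with the matching keys, the resulting qpair is checked, and everything is torn down again. A minimal sketch of the target-side half of that pass, using the same rpc.py calls that appear in the trace (path shortened); the key objects key0/ckey0 and the listener on 10.0.0.2:4420 are assumed to have been set up earlier in auth.sh and are not shown in this excerpt:

# Authorize the host NQN on the subsystem with DH-HMAC-CHAP keys
# (key0 = host key, ckey0 = controller/bidirectional key).
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Revoke the host again once the iteration is done.
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb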
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.321 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.321 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.582 00:19:27.582 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.582 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.582 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.843 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.843 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.843 09:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.843 09:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.843 09:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.843 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.843 { 00:19:27.843 "cntlid": 27, 00:19:27.843 "qid": 0, 00:19:27.843 "state": "enabled", 00:19:27.843 "thread": "nvmf_tgt_poll_group_000", 00:19:27.843 "listen_address": { 00:19:27.843 "trtype": "TCP", 00:19:27.843 "adrfam": "IPv4", 00:19:27.843 "traddr": "10.0.0.2", 00:19:27.843 "trsvcid": "4420" 00:19:27.843 }, 00:19:27.843 "peer_address": { 00:19:27.843 "trtype": "TCP", 00:19:27.843 "adrfam": "IPv4", 00:19:27.843 "traddr": "10.0.0.1", 00:19:27.843 "trsvcid": "34880" 00:19:27.843 }, 00:19:27.843 "auth": { 00:19:27.843 "state": "completed", 00:19:27.843 "digest": "sha256", 00:19:27.843 "dhgroup": "ffdhe4096" 00:19:27.843 } 00:19:27.843 } 00:19:27.843 ]' 00:19:27.843 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.843 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.843 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.843 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:27.843 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.843 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.844 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.844 09:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.104 09:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjYyOGZiNTAyZTBhZDYxMzEwOGI4Yzg5ZmNkMTYzMDfXT6l9: --dhchap-ctrl-secret DHHC-1:02:NDAzNTFiOWUyOGFhNjhlNGI5NjMxZGFjZjc3OTczMjAyN2Q0YjdmMWYyMWRiOTQ0QIEdSw==: 00:19:28.677 09:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.677 09:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:28.677 09:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.677 09:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.677 09:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.677 09:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.677 09:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:28.677 09:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:28.677 09:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:28.677 09:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.677 09:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:28.677 09:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:28.677 09:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:28.677 09:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.677 09:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.677 09:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.677 09:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.677 09:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.677 09:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.677 09:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.938 00:19:28.938 09:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.938 09:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.938 09:28:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.200 09:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.200 09:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.200 09:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.200 09:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.200 09:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.200 09:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.200 { 00:19:29.200 "cntlid": 29, 00:19:29.200 "qid": 0, 00:19:29.200 "state": "enabled", 00:19:29.200 "thread": "nvmf_tgt_poll_group_000", 00:19:29.200 "listen_address": { 00:19:29.200 "trtype": "TCP", 00:19:29.200 "adrfam": "IPv4", 00:19:29.200 "traddr": "10.0.0.2", 00:19:29.200 "trsvcid": "4420" 00:19:29.200 }, 00:19:29.200 "peer_address": { 00:19:29.200 "trtype": "TCP", 00:19:29.200 "adrfam": "IPv4", 00:19:29.200 "traddr": "10.0.0.1", 00:19:29.200 "trsvcid": "34914" 00:19:29.200 }, 00:19:29.200 "auth": { 00:19:29.200 "state": "completed", 00:19:29.200 "digest": "sha256", 00:19:29.200 "dhgroup": "ffdhe4096" 00:19:29.200 } 00:19:29.200 } 00:19:29.200 ]' 00:19:29.200 09:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.200 09:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.200 09:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.200 09:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:29.200 09:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.461 09:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.461 09:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.461 09:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.461 09:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZDJlYjgxZGUzMmRlNjFiM2MyMWZmNzlkYWIzNTBmYWU3NmNkNmEwMjU5MGZjOTAyb6nGcQ==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5NGU3OGIyYzBhMWYzY2FhMTg3MGVhZmFlNjhjMGINfRK5: 00:19:30.034 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.034 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:30.034 09:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.034 09:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.034 09:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.034 09:28:17 
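Each iteration verifies the negotiated parameters by pulling the qpair list from the target and inspecting its auth block with jq, exactly as the auth.sh@44-48 checks above do. A condensed sketch of that verification for the ffdhe4096 passes, assuming the default target RPC socket used by rpc_cmd in this run:

qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

# The auth block of the first qpair should reflect what was configured.
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]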
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.034 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:30.034 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:30.295 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:30.295 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.295 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:30.295 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:30.295 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:30.295 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.295 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:30.295 09:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.295 09:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.295 09:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.295 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.295 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.556 00:19:30.556 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.556 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.556 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.817 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.817 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.817 09:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.817 09:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.817 09:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.817 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.817 { 00:19:30.817 "cntlid": 31, 00:19:30.818 "qid": 0, 00:19:30.818 "state": "enabled", 00:19:30.818 "thread": "nvmf_tgt_poll_group_000", 00:19:30.818 "listen_address": { 00:19:30.818 "trtype": "TCP", 00:19:30.818 "adrfam": "IPv4", 00:19:30.818 "traddr": "10.0.0.2", 00:19:30.818 "trsvcid": "4420" 00:19:30.818 }, 
00:19:30.818 "peer_address": { 00:19:30.818 "trtype": "TCP", 00:19:30.818 "adrfam": "IPv4", 00:19:30.818 "traddr": "10.0.0.1", 00:19:30.818 "trsvcid": "34938" 00:19:30.818 }, 00:19:30.818 "auth": { 00:19:30.818 "state": "completed", 00:19:30.818 "digest": "sha256", 00:19:30.818 "dhgroup": "ffdhe4096" 00:19:30.818 } 00:19:30.818 } 00:19:30.818 ]' 00:19:30.818 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.818 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.818 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.818 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:30.818 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.818 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.818 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.818 09:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.078 09:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:MDc1YTFkZmUzMjEzODgxNjFlNDg5OGNkOGJiYzQ5ZTUyOTQ3NTc0MzdiMjNjMDgzMzhjYTFkMjRmNzBhY2NhYq3ofEE=: 00:19:31.646 09:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.646 09:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:31.646 09:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.646 09:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.646 09:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.646 09:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.646 09:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.646 09:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:31.646 09:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:31.906 09:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:31.906 09:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.906 09:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:31.906 09:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:31.906 09:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:31.906 09:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:31.906 09:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.906 09:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.906 09:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.906 09:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.906 09:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.906 09:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.167 00:19:32.167 09:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.167 09:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.167 09:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.427 09:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.427 09:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.427 09:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.427 09:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.427 09:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.427 09:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.427 { 00:19:32.427 "cntlid": 33, 00:19:32.427 "qid": 0, 00:19:32.427 "state": "enabled", 00:19:32.427 "thread": "nvmf_tgt_poll_group_000", 00:19:32.427 "listen_address": { 00:19:32.427 "trtype": "TCP", 00:19:32.427 "adrfam": "IPv4", 00:19:32.427 "traddr": "10.0.0.2", 00:19:32.427 "trsvcid": "4420" 00:19:32.427 }, 00:19:32.427 "peer_address": { 00:19:32.427 "trtype": "TCP", 00:19:32.427 "adrfam": "IPv4", 00:19:32.427 "traddr": "10.0.0.1", 00:19:32.427 "trsvcid": "34950" 00:19:32.427 }, 00:19:32.427 "auth": { 00:19:32.427 "state": "completed", 00:19:32.427 "digest": "sha256", 00:19:32.427 "dhgroup": "ffdhe6144" 00:19:32.427 } 00:19:32.427 } 00:19:32.427 ]' 00:19:32.427 09:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.427 09:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:32.427 09:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.427 09:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:32.427 09:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.427 09:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.427 09:28:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.427 09:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.687 09:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTdlOGRjMTUxZWQ3OTg2MjRjZTQwNjg5ZmZmZGY1ZDJiYjVlMGVhN2M5Y2JjNzM2wwp8yw==: --dhchap-ctrl-secret DHHC-1:03:YTFmZWQxOTU0MzEwZDY0OGE1ZTQzOGQ0M2Q2NmY0MmJhZWZmNDU2NDA5MzBkODAzZTgzOWZjMjM1ZDI3NWI4ZksrVuA=: 00:19:33.258 09:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.258 09:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:33.258 09:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.258 09:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.258 09:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.258 09:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.258 09:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:33.258 09:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:33.519 09:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:33.519 09:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.519 09:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:33.519 09:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:33.519 09:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:33.519 09:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.520 09:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.520 09:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.520 09:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.520 09:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.520 09:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.520 09:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.780 00:19:33.780 09:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.780 09:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.780 09:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.040 09:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.040 09:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.040 09:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.040 09:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.040 09:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.040 09:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.040 { 00:19:34.040 "cntlid": 35, 00:19:34.040 "qid": 0, 00:19:34.040 "state": "enabled", 00:19:34.040 "thread": "nvmf_tgt_poll_group_000", 00:19:34.040 "listen_address": { 00:19:34.040 "trtype": "TCP", 00:19:34.040 "adrfam": "IPv4", 00:19:34.040 "traddr": "10.0.0.2", 00:19:34.040 "trsvcid": "4420" 00:19:34.040 }, 00:19:34.040 "peer_address": { 00:19:34.040 "trtype": "TCP", 00:19:34.040 "adrfam": "IPv4", 00:19:34.040 "traddr": "10.0.0.1", 00:19:34.040 "trsvcid": "34996" 00:19:34.040 }, 00:19:34.040 "auth": { 00:19:34.040 "state": "completed", 00:19:34.040 "digest": "sha256", 00:19:34.040 "dhgroup": "ffdhe6144" 00:19:34.040 } 00:19:34.040 } 00:19:34.040 ]' 00:19:34.040 09:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.040 09:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.040 09:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.040 09:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:34.040 09:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.040 09:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.040 09:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.040 09:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.301 09:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjYyOGZiNTAyZTBhZDYxMzEwOGI4Yzg5ZmNkMTYzMDfXT6l9: --dhchap-ctrl-secret DHHC-1:02:NDAzNTFiOWUyOGFhNjhlNGI5NjMxZGFjZjc3OTczMjAyN2Q0YjdmMWYyMWRiOTQ0QIEdSw==: 00:19:34.871 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.871 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:34.871 09:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.871 09:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.871 09:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.871 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.871 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:34.871 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:35.131 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:35.131 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.131 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:35.131 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:35.131 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:35.131 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.131 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.131 09:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.131 09:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.131 09:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.132 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.132 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.391 00:19:35.391 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.391 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.391 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.649 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.649 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.649 09:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.649 09:28:22 nvmf_tcp.nvmf_auth_target -- 
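The same credentials are then exercised through the kernel initiator: nvme connect passes the DH-HMAC-CHAP secrets directly as DHHC-1 strings, and a matching nvme disconnect tears the association down before the host entry is removed from the subsystem. A sketch of that step, mirroring the connect invocation in the trace; the secret values are the base64 DHHC-1 blobs logged above, abbreviated here as placeholders:

# Connect with explicit DH-HMAC-CHAP host and controller secrets.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
    --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb \
    --dhchap-secret 'DHHC-1:01:<host secret>' \
    --dhchap-ctrl-secret 'DHHC-1:02:<controller secret>'

nvme disconnect -n nqn.2024-03.io.spdk:cnode0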
common/autotest_common.sh@10 -- # set +x 00:19:35.649 09:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.649 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.649 { 00:19:35.649 "cntlid": 37, 00:19:35.649 "qid": 0, 00:19:35.650 "state": "enabled", 00:19:35.650 "thread": "nvmf_tgt_poll_group_000", 00:19:35.650 "listen_address": { 00:19:35.650 "trtype": "TCP", 00:19:35.650 "adrfam": "IPv4", 00:19:35.650 "traddr": "10.0.0.2", 00:19:35.650 "trsvcid": "4420" 00:19:35.650 }, 00:19:35.650 "peer_address": { 00:19:35.650 "trtype": "TCP", 00:19:35.650 "adrfam": "IPv4", 00:19:35.650 "traddr": "10.0.0.1", 00:19:35.650 "trsvcid": "35018" 00:19:35.650 }, 00:19:35.650 "auth": { 00:19:35.650 "state": "completed", 00:19:35.650 "digest": "sha256", 00:19:35.650 "dhgroup": "ffdhe6144" 00:19:35.650 } 00:19:35.650 } 00:19:35.650 ]' 00:19:35.650 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.650 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.650 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.650 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:35.650 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.909 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.909 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.909 09:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.909 09:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZDJlYjgxZGUzMmRlNjFiM2MyMWZmNzlkYWIzNTBmYWU3NmNkNmEwMjU5MGZjOTAyb6nGcQ==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5NGU3OGIyYzBhMWYzY2FhMTg3MGVhZmFlNjhjMGINfRK5: 00:19:36.476 09:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.476 09:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:36.476 09:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.476 09:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.476 09:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.476 09:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.476 09:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:36.476 09:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:36.735 09:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:19:36.735 09:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.735 09:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:36.735 09:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:36.735 09:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:36.735 09:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.735 09:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:36.735 09:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.735 09:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.735 09:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.735 09:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.735 09:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.993 00:19:36.993 09:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.993 09:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.993 09:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.253 09:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.253 09:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.253 09:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.253 09:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.253 09:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.253 09:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.253 { 00:19:37.253 "cntlid": 39, 00:19:37.253 "qid": 0, 00:19:37.253 "state": "enabled", 00:19:37.253 "thread": "nvmf_tgt_poll_group_000", 00:19:37.253 "listen_address": { 00:19:37.253 "trtype": "TCP", 00:19:37.253 "adrfam": "IPv4", 00:19:37.253 "traddr": "10.0.0.2", 00:19:37.253 "trsvcid": "4420" 00:19:37.253 }, 00:19:37.253 "peer_address": { 00:19:37.253 "trtype": "TCP", 00:19:37.253 "adrfam": "IPv4", 00:19:37.253 "traddr": "10.0.0.1", 00:19:37.253 "trsvcid": "45194" 00:19:37.253 }, 00:19:37.253 "auth": { 00:19:37.253 "state": "completed", 00:19:37.253 "digest": "sha256", 00:19:37.253 "dhgroup": "ffdhe6144" 00:19:37.253 } 00:19:37.253 } 00:19:37.253 ]' 00:19:37.253 09:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.253 09:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.253 09:28:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.253 09:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:37.253 09:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.513 09:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.513 09:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.513 09:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.513 09:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:MDc1YTFkZmUzMjEzODgxNjFlNDg5OGNkOGJiYzQ5ZTUyOTQ3NTc0MzdiMjNjMDgzMzhjYTFkMjRmNzBhY2NhYq3ofEE=: 00:19:38.454 09:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.454 09:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:38.454 09:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.454 09:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.454 09:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.454 09:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.454 09:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.454 09:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:38.454 09:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:38.454 09:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:38.454 09:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.454 09:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:38.454 09:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:38.454 09:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:38.454 09:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.454 09:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.454 09:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.454 09:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.454 09:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.454 09:28:25 
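All of the passes in this excerpt come from the same nested loop in target/auth.sh: the outer loop walks the DH groups (ffdhe4096, ffdhe6144 and ffdhe8192 so far), the inner loop walks key indices 0-3, and connect_authenticate runs one attach/verify/detach cycle per combination. A schematic of that loop, using the array and helper names visible in the trace; the array contents are assumptions based only on the values actually exercised above:

digest=sha256
dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)   # groups seen in this excerpt
# keys[] is assumed to hold the four DH-HMAC-CHAP secrets; only its indices matter here.
keys=(k0 k1 k2 k3)

for dhgroup in "${dhgroups[@]}"; do        # target/auth.sh@92
    for keyid in "${!keys[@]}"; do         # target/auth.sh@93
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"                          # @94
        connect_authenticate "$digest" "$dhgroup" "$keyid"        # @96
    done
done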
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.454 09:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.024 00:19:39.024 09:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.024 09:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.024 09:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.024 09:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.024 09:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.024 09:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.290 09:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.290 09:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.290 09:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.290 { 00:19:39.290 "cntlid": 41, 00:19:39.290 "qid": 0, 00:19:39.290 "state": "enabled", 00:19:39.290 "thread": "nvmf_tgt_poll_group_000", 00:19:39.290 "listen_address": { 00:19:39.290 "trtype": "TCP", 00:19:39.290 "adrfam": "IPv4", 00:19:39.290 "traddr": "10.0.0.2", 00:19:39.290 "trsvcid": "4420" 00:19:39.290 }, 00:19:39.290 "peer_address": { 00:19:39.290 "trtype": "TCP", 00:19:39.290 "adrfam": "IPv4", 00:19:39.290 "traddr": "10.0.0.1", 00:19:39.290 "trsvcid": "45222" 00:19:39.290 }, 00:19:39.290 "auth": { 00:19:39.290 "state": "completed", 00:19:39.290 "digest": "sha256", 00:19:39.290 "dhgroup": "ffdhe8192" 00:19:39.290 } 00:19:39.290 } 00:19:39.290 ]' 00:19:39.290 09:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.290 09:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.290 09:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.290 09:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:39.290 09:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.290 09:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.290 09:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.290 09:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.551 09:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret 
DHHC-1:00:MTdlOGRjMTUxZWQ3OTg2MjRjZTQwNjg5ZmZmZGY1ZDJiYjVlMGVhN2M5Y2JjNzM2wwp8yw==: --dhchap-ctrl-secret DHHC-1:03:YTFmZWQxOTU0MzEwZDY0OGE1ZTQzOGQ0M2Q2NmY0MmJhZWZmNDU2NDA5MzBkODAzZTgzOWZjMjM1ZDI3NWI4ZksrVuA=: 00:19:40.121 09:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.121 09:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:40.121 09:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.121 09:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.121 09:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.121 09:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.121 09:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:40.121 09:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:40.121 09:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:40.121 09:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.121 09:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:40.121 09:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:40.121 09:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:40.121 09:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.121 09:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.121 09:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.121 09:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.121 09:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.121 09:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.121 09:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.691 00:19:40.692 09:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.692 09:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.692 09:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.952 09:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.952 09:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.952 09:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.952 09:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.952 09:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.952 09:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.952 { 00:19:40.952 "cntlid": 43, 00:19:40.952 "qid": 0, 00:19:40.952 "state": "enabled", 00:19:40.952 "thread": "nvmf_tgt_poll_group_000", 00:19:40.952 "listen_address": { 00:19:40.952 "trtype": "TCP", 00:19:40.952 "adrfam": "IPv4", 00:19:40.952 "traddr": "10.0.0.2", 00:19:40.952 "trsvcid": "4420" 00:19:40.952 }, 00:19:40.952 "peer_address": { 00:19:40.952 "trtype": "TCP", 00:19:40.952 "adrfam": "IPv4", 00:19:40.952 "traddr": "10.0.0.1", 00:19:40.952 "trsvcid": "45262" 00:19:40.952 }, 00:19:40.952 "auth": { 00:19:40.952 "state": "completed", 00:19:40.952 "digest": "sha256", 00:19:40.952 "dhgroup": "ffdhe8192" 00:19:40.952 } 00:19:40.952 } 00:19:40.952 ]' 00:19:40.952 09:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.952 09:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.952 09:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.952 09:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:40.952 09:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.212 09:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.212 09:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.212 09:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.212 09:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjYyOGZiNTAyZTBhZDYxMzEwOGI4Yzg5ZmNkMTYzMDfXT6l9: --dhchap-ctrl-secret DHHC-1:02:NDAzNTFiOWUyOGFhNjhlNGI5NjMxZGFjZjc3OTczMjAyN2Q0YjdmMWYyMWRiOTQ0QIEdSw==: 00:19:42.155 09:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.155 09:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:42.155 09:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.155 09:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.155 09:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.155 09:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:19:42.155 09:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:42.155 09:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:42.155 09:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:42.155 09:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.155 09:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:42.155 09:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:42.155 09:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:42.155 09:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.156 09:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.156 09:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.156 09:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.156 09:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.156 09:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.156 09:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.727 00:19:42.727 09:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.727 09:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.727 09:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.727 09:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.727 09:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.727 09:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.728 09:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.728 09:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.728 09:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.728 { 00:19:42.728 "cntlid": 45, 00:19:42.728 "qid": 0, 00:19:42.728 "state": "enabled", 00:19:42.728 "thread": "nvmf_tgt_poll_group_000", 00:19:42.728 "listen_address": { 00:19:42.728 "trtype": "TCP", 00:19:42.728 "adrfam": "IPv4", 00:19:42.728 "traddr": "10.0.0.2", 00:19:42.728 "trsvcid": "4420" 
00:19:42.728 }, 00:19:42.728 "peer_address": { 00:19:42.728 "trtype": "TCP", 00:19:42.728 "adrfam": "IPv4", 00:19:42.728 "traddr": "10.0.0.1", 00:19:42.728 "trsvcid": "45304" 00:19:42.728 }, 00:19:42.728 "auth": { 00:19:42.728 "state": "completed", 00:19:42.728 "digest": "sha256", 00:19:42.728 "dhgroup": "ffdhe8192" 00:19:42.728 } 00:19:42.728 } 00:19:42.728 ]' 00:19:42.728 09:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.989 09:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.989 09:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.989 09:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:42.989 09:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.989 09:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.989 09:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.989 09:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.249 09:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZDJlYjgxZGUzMmRlNjFiM2MyMWZmNzlkYWIzNTBmYWU3NmNkNmEwMjU5MGZjOTAyb6nGcQ==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5NGU3OGIyYzBhMWYzY2FhMTg3MGVhZmFlNjhjMGINfRK5: 00:19:43.821 09:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.821 09:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:43.821 09:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.821 09:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.821 09:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.821 09:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.821 09:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:43.821 09:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:43.821 09:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:43.821 09:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.822 09:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:43.822 09:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:43.822 09:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:43.822 09:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.822 09:28:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:43.822 09:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.822 09:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.822 09:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.822 09:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.822 09:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.393 00:19:44.393 09:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.393 09:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.393 09:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.655 09:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.655 09:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.655 09:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.655 09:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.655 09:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.655 09:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.655 { 00:19:44.655 "cntlid": 47, 00:19:44.655 "qid": 0, 00:19:44.655 "state": "enabled", 00:19:44.655 "thread": "nvmf_tgt_poll_group_000", 00:19:44.655 "listen_address": { 00:19:44.655 "trtype": "TCP", 00:19:44.655 "adrfam": "IPv4", 00:19:44.655 "traddr": "10.0.0.2", 00:19:44.655 "trsvcid": "4420" 00:19:44.655 }, 00:19:44.655 "peer_address": { 00:19:44.655 "trtype": "TCP", 00:19:44.655 "adrfam": "IPv4", 00:19:44.655 "traddr": "10.0.0.1", 00:19:44.655 "trsvcid": "45328" 00:19:44.655 }, 00:19:44.655 "auth": { 00:19:44.655 "state": "completed", 00:19:44.655 "digest": "sha256", 00:19:44.655 "dhgroup": "ffdhe8192" 00:19:44.655 } 00:19:44.655 } 00:19:44.655 ]' 00:19:44.655 09:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.655 09:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.655 09:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.655 09:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:44.655 09:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.655 09:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.655 09:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.655 
09:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.916 09:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:MDc1YTFkZmUzMjEzODgxNjFlNDg5OGNkOGJiYzQ5ZTUyOTQ3NTc0MzdiMjNjMDgzMzhjYTFkMjRmNzBhY2NhYq3ofEE=: 00:19:45.489 09:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.489 09:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:45.489 09:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.489 09:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.489 09:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.489 09:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:45.489 09:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:45.489 09:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.489 09:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:45.489 09:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:45.751 09:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:45.751 09:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.751 09:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:45.751 09:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:45.751 09:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:45.751 09:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.751 09:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.751 09:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.751 09:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.751 09:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.751 09:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.751 09:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.751 00:19:46.012 09:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.012 09:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.012 09:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.012 09:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.012 09:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.012 09:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.012 09:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.012 09:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.012 09:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.012 { 00:19:46.012 "cntlid": 49, 00:19:46.012 "qid": 0, 00:19:46.012 "state": "enabled", 00:19:46.012 "thread": "nvmf_tgt_poll_group_000", 00:19:46.012 "listen_address": { 00:19:46.012 "trtype": "TCP", 00:19:46.012 "adrfam": "IPv4", 00:19:46.012 "traddr": "10.0.0.2", 00:19:46.012 "trsvcid": "4420" 00:19:46.012 }, 00:19:46.012 "peer_address": { 00:19:46.012 "trtype": "TCP", 00:19:46.012 "adrfam": "IPv4", 00:19:46.012 "traddr": "10.0.0.1", 00:19:46.012 "trsvcid": "45364" 00:19:46.012 }, 00:19:46.012 "auth": { 00:19:46.012 "state": "completed", 00:19:46.012 "digest": "sha384", 00:19:46.012 "dhgroup": "null" 00:19:46.012 } 00:19:46.012 } 00:19:46.012 ]' 00:19:46.012 09:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.012 09:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:46.012 09:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.273 09:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:46.273 09:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.273 09:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.273 09:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.273 09:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.273 09:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTdlOGRjMTUxZWQ3OTg2MjRjZTQwNjg5ZmZmZGY1ZDJiYjVlMGVhN2M5Y2JjNzM2wwp8yw==: --dhchap-ctrl-secret DHHC-1:03:YTFmZWQxOTU0MzEwZDY0OGE1ZTQzOGQ0M2Q2NmY0MmJhZWZmNDU2NDA5MzBkODAzZTgzOWZjMjM1ZDI3NWI4ZksrVuA=: 00:19:47.297 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.297 09:28:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:47.297 09:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.297 09:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.297 09:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.297 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.297 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:47.297 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:47.297 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:47.297 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.297 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:47.297 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:47.297 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:47.297 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.297 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.297 09:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.297 09:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.297 09:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.297 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.297 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.558 00:19:47.558 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.558 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.558 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.558 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.558 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.558 09:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.558 09:28:34 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:47.558 09:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.558 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.558 { 00:19:47.558 "cntlid": 51, 00:19:47.558 "qid": 0, 00:19:47.558 "state": "enabled", 00:19:47.558 "thread": "nvmf_tgt_poll_group_000", 00:19:47.558 "listen_address": { 00:19:47.558 "trtype": "TCP", 00:19:47.558 "adrfam": "IPv4", 00:19:47.558 "traddr": "10.0.0.2", 00:19:47.558 "trsvcid": "4420" 00:19:47.558 }, 00:19:47.558 "peer_address": { 00:19:47.558 "trtype": "TCP", 00:19:47.558 "adrfam": "IPv4", 00:19:47.558 "traddr": "10.0.0.1", 00:19:47.558 "trsvcid": "41972" 00:19:47.558 }, 00:19:47.558 "auth": { 00:19:47.559 "state": "completed", 00:19:47.559 "digest": "sha384", 00:19:47.559 "dhgroup": "null" 00:19:47.559 } 00:19:47.559 } 00:19:47.559 ]' 00:19:47.559 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.559 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:47.819 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.819 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:47.819 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.819 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.819 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.819 09:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.819 09:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjYyOGZiNTAyZTBhZDYxMzEwOGI4Yzg5ZmNkMTYzMDfXT6l9: --dhchap-ctrl-secret DHHC-1:02:NDAzNTFiOWUyOGFhNjhlNGI5NjMxZGFjZjc3OTczMjAyN2Q0YjdmMWYyMWRiOTQ0QIEdSw==: 00:19:48.763 09:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.763 09:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:48.763 09:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.763 09:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.763 09:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.763 09:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.763 09:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:48.763 09:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:48.763 09:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:48.763 09:28:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.763 09:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:48.763 09:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:48.763 09:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:48.763 09:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.763 09:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.763 09:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.763 09:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.763 09:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.763 09:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.763 09:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.025 00:19:49.025 09:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.025 09:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.025 09:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.287 09:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.287 09:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.287 09:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.287 09:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.287 09:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.287 09:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.287 { 00:19:49.287 "cntlid": 53, 00:19:49.287 "qid": 0, 00:19:49.287 "state": "enabled", 00:19:49.287 "thread": "nvmf_tgt_poll_group_000", 00:19:49.287 "listen_address": { 00:19:49.287 "trtype": "TCP", 00:19:49.287 "adrfam": "IPv4", 00:19:49.287 "traddr": "10.0.0.2", 00:19:49.287 "trsvcid": "4420" 00:19:49.287 }, 00:19:49.287 "peer_address": { 00:19:49.287 "trtype": "TCP", 00:19:49.287 "adrfam": "IPv4", 00:19:49.287 "traddr": "10.0.0.1", 00:19:49.287 "trsvcid": "42006" 00:19:49.287 }, 00:19:49.287 "auth": { 00:19:49.287 "state": "completed", 00:19:49.287 "digest": "sha384", 00:19:49.287 "dhgroup": "null" 00:19:49.287 } 00:19:49.287 } 00:19:49.287 ]' 00:19:49.287 09:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.287 09:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:19:49.287 09:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.287 09:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:49.287 09:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.287 09:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.287 09:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.287 09:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.549 09:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZDJlYjgxZGUzMmRlNjFiM2MyMWZmNzlkYWIzNTBmYWU3NmNkNmEwMjU5MGZjOTAyb6nGcQ==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5NGU3OGIyYzBhMWYzY2FhMTg3MGVhZmFlNjhjMGINfRK5: 00:19:50.122 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.122 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:50.122 09:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.122 09:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.122 09:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.122 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.122 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:50.122 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:50.384 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:50.384 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.384 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:50.384 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:50.384 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:50.384 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.384 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:50.384 09:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.384 09:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.384 09:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.384 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.384 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.644 00:19:50.644 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.644 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.644 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.644 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.644 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.644 09:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.644 09:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.644 09:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.644 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.644 { 00:19:50.644 "cntlid": 55, 00:19:50.644 "qid": 0, 00:19:50.644 "state": "enabled", 00:19:50.644 "thread": "nvmf_tgt_poll_group_000", 00:19:50.644 "listen_address": { 00:19:50.644 "trtype": "TCP", 00:19:50.644 "adrfam": "IPv4", 00:19:50.644 "traddr": "10.0.0.2", 00:19:50.644 "trsvcid": "4420" 00:19:50.644 }, 00:19:50.644 "peer_address": { 00:19:50.644 "trtype": "TCP", 00:19:50.644 "adrfam": "IPv4", 00:19:50.644 "traddr": "10.0.0.1", 00:19:50.644 "trsvcid": "42026" 00:19:50.644 }, 00:19:50.644 "auth": { 00:19:50.644 "state": "completed", 00:19:50.644 "digest": "sha384", 00:19:50.644 "dhgroup": "null" 00:19:50.644 } 00:19:50.644 } 00:19:50.644 ]' 00:19:50.644 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.644 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:50.644 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.906 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:50.906 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.906 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.906 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.906 09:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.906 09:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:MDc1YTFkZmUzMjEzODgxNjFlNDg5OGNkOGJiYzQ5ZTUyOTQ3NTc0MzdiMjNjMDgzMzhjYTFkMjRmNzBhY2NhYq3ofEE=: 00:19:51.848 09:28:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.849 09:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:51.849 09:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.849 09:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.849 09:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.849 09:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.849 09:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.849 09:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:51.849 09:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:51.849 09:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:51.849 09:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.849 09:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:51.849 09:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:51.849 09:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:51.849 09:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.849 09:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.849 09:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.849 09:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.849 09:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.849 09:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.849 09:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.110 00:19:52.110 09:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.110 09:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.110 09:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.370 09:28:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.370 09:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.370 09:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.371 09:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.371 09:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.371 09:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.371 { 00:19:52.371 "cntlid": 57, 00:19:52.371 "qid": 0, 00:19:52.371 "state": "enabled", 00:19:52.371 "thread": "nvmf_tgt_poll_group_000", 00:19:52.371 "listen_address": { 00:19:52.371 "trtype": "TCP", 00:19:52.371 "adrfam": "IPv4", 00:19:52.371 "traddr": "10.0.0.2", 00:19:52.371 "trsvcid": "4420" 00:19:52.371 }, 00:19:52.371 "peer_address": { 00:19:52.371 "trtype": "TCP", 00:19:52.371 "adrfam": "IPv4", 00:19:52.371 "traddr": "10.0.0.1", 00:19:52.371 "trsvcid": "42048" 00:19:52.371 }, 00:19:52.371 "auth": { 00:19:52.371 "state": "completed", 00:19:52.371 "digest": "sha384", 00:19:52.371 "dhgroup": "ffdhe2048" 00:19:52.371 } 00:19:52.371 } 00:19:52.371 ]' 00:19:52.371 09:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.371 09:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:52.371 09:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.371 09:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:52.371 09:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.371 09:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.371 09:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.371 09:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.632 09:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTdlOGRjMTUxZWQ3OTg2MjRjZTQwNjg5ZmZmZGY1ZDJiYjVlMGVhN2M5Y2JjNzM2wwp8yw==: --dhchap-ctrl-secret DHHC-1:03:YTFmZWQxOTU0MzEwZDY0OGE1ZTQzOGQ0M2Q2NmY0MmJhZWZmNDU2NDA5MzBkODAzZTgzOWZjMjM1ZDI3NWI4ZksrVuA=: 00:19:53.205 09:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.205 09:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:53.205 09:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.205 09:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.205 09:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.205 09:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.205 09:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:53.205 09:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:53.466 09:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:53.466 09:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.466 09:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:53.466 09:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:53.466 09:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:53.466 09:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.466 09:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.466 09:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.466 09:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.466 09:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.466 09:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.466 09:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.728 00:19:53.728 09:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.728 09:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.728 09:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.728 09:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.728 09:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.728 09:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.728 09:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.728 09:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.728 09:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.728 { 00:19:53.728 "cntlid": 59, 00:19:53.728 "qid": 0, 00:19:53.728 "state": "enabled", 00:19:53.728 "thread": "nvmf_tgt_poll_group_000", 00:19:53.728 "listen_address": { 00:19:53.728 "trtype": "TCP", 00:19:53.728 "adrfam": "IPv4", 00:19:53.728 "traddr": "10.0.0.2", 00:19:53.728 "trsvcid": "4420" 00:19:53.728 }, 00:19:53.728 "peer_address": { 00:19:53.728 "trtype": "TCP", 00:19:53.728 "adrfam": "IPv4", 00:19:53.728 
"traddr": "10.0.0.1", 00:19:53.728 "trsvcid": "42086" 00:19:53.728 }, 00:19:53.728 "auth": { 00:19:53.728 "state": "completed", 00:19:53.728 "digest": "sha384", 00:19:53.728 "dhgroup": "ffdhe2048" 00:19:53.728 } 00:19:53.728 } 00:19:53.728 ]' 00:19:53.728 09:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.989 09:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:53.989 09:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.989 09:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:53.989 09:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.989 09:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.989 09:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.989 09:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.251 09:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjYyOGZiNTAyZTBhZDYxMzEwOGI4Yzg5ZmNkMTYzMDfXT6l9: --dhchap-ctrl-secret DHHC-1:02:NDAzNTFiOWUyOGFhNjhlNGI5NjMxZGFjZjc3OTczMjAyN2Q0YjdmMWYyMWRiOTQ0QIEdSw==: 00:19:54.822 09:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.822 09:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:54.822 09:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.822 09:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.822 09:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.822 09:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.822 09:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:54.822 09:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:54.822 09:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:54.822 09:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.822 09:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:54.822 09:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:54.822 09:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:54.822 09:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.822 09:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.822 09:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.822 09:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.822 09:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.822 09:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.822 09:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.082 00:19:55.082 09:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.082 09:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.082 09:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.343 09:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.343 09:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.343 09:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.343 09:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.343 09:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.343 09:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.343 { 00:19:55.343 "cntlid": 61, 00:19:55.343 "qid": 0, 00:19:55.343 "state": "enabled", 00:19:55.343 "thread": "nvmf_tgt_poll_group_000", 00:19:55.343 "listen_address": { 00:19:55.343 "trtype": "TCP", 00:19:55.343 "adrfam": "IPv4", 00:19:55.343 "traddr": "10.0.0.2", 00:19:55.343 "trsvcid": "4420" 00:19:55.343 }, 00:19:55.343 "peer_address": { 00:19:55.343 "trtype": "TCP", 00:19:55.343 "adrfam": "IPv4", 00:19:55.343 "traddr": "10.0.0.1", 00:19:55.343 "trsvcid": "42108" 00:19:55.343 }, 00:19:55.343 "auth": { 00:19:55.343 "state": "completed", 00:19:55.343 "digest": "sha384", 00:19:55.343 "dhgroup": "ffdhe2048" 00:19:55.343 } 00:19:55.343 } 00:19:55.343 ]' 00:19:55.343 09:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.343 09:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:55.343 09:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.343 09:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:55.343 09:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.343 09:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.343 09:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.343 09:28:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.603 09:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZDJlYjgxZGUzMmRlNjFiM2MyMWZmNzlkYWIzNTBmYWU3NmNkNmEwMjU5MGZjOTAyb6nGcQ==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5NGU3OGIyYzBhMWYzY2FhMTg3MGVhZmFlNjhjMGINfRK5: 00:19:56.176 09:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.176 09:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:56.176 09:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.176 09:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.438 09:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.438 09:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.438 09:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:56.438 09:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:56.438 09:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:56.438 09:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.438 09:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:56.438 09:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:56.438 09:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:56.438 09:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.438 09:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:56.438 09:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.438 09:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.438 09:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.438 09:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.438 09:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.698 00:19:56.698 09:28:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.698 09:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.698 09:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.958 09:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.958 09:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.958 09:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.958 09:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.958 09:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.958 09:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.958 { 00:19:56.958 "cntlid": 63, 00:19:56.958 "qid": 0, 00:19:56.958 "state": "enabled", 00:19:56.958 "thread": "nvmf_tgt_poll_group_000", 00:19:56.958 "listen_address": { 00:19:56.958 "trtype": "TCP", 00:19:56.958 "adrfam": "IPv4", 00:19:56.958 "traddr": "10.0.0.2", 00:19:56.958 "trsvcid": "4420" 00:19:56.958 }, 00:19:56.958 "peer_address": { 00:19:56.958 "trtype": "TCP", 00:19:56.958 "adrfam": "IPv4", 00:19:56.958 "traddr": "10.0.0.1", 00:19:56.958 "trsvcid": "37698" 00:19:56.958 }, 00:19:56.958 "auth": { 00:19:56.958 "state": "completed", 00:19:56.958 "digest": "sha384", 00:19:56.958 "dhgroup": "ffdhe2048" 00:19:56.958 } 00:19:56.958 } 00:19:56.958 ]' 00:19:56.958 09:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.958 09:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.958 09:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.958 09:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:56.958 09:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.958 09:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.958 09:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.958 09:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.218 09:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:MDc1YTFkZmUzMjEzODgxNjFlNDg5OGNkOGJiYzQ5ZTUyOTQ3NTc0MzdiMjNjMDgzMzhjYTFkMjRmNzBhY2NhYq3ofEE=: 00:19:57.788 09:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.788 09:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:57.788 09:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.788 09:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
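The same verification cycle repeats for every digest, DH group and key index the test iterates over. Purely as a sketch distilled from the commands already visible in this trace (the subsystem NQN, host UUID, addresses, host RPC socket and key names are the ones the test itself uses; sha384/ffdhe3072/key0 simply match the iteration that follows; rpc.py stands for spdk/scripts/rpc.py, and target-side calls go through the test's rpc_cmd wrapper on the target's default RPC socket), one pass of the flow looks roughly like:

  # target side: authorize the host on the subsystem with a key pair
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # SPDK initiator side: pin one digest and one DH group, then attach
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # confirm the controller exists and the qpair completed DH-HMAC-CHAP with the
  # expected digest and dhgroup ("state": "completed"), then detach
  rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # kernel initiator side: the same keys expressed as DHHC-1 secrets
  # (<secret>/<ctrl-secret> are placeholders for the DHHC-1:xx:... strings in the trace)
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
      --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb \
      --dhchap-secret <secret> --dhchap-ctrl-secret <ctrl-secret>
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # tear down before the next digest/dhgroup/key combination
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
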
00:19:57.788 09:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.788 09:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.788 09:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.788 09:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:57.788 09:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:57.788 09:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:57.788 09:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.788 09:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:57.788 09:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:57.788 09:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:57.788 09:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.788 09:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.788 09:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.788 09:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.788 09:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.788 09:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.788 09:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.049 00:19:58.049 09:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.049 09:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.049 09:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.310 09:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.310 09:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.310 09:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.311 09:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.311 09:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.311 09:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.311 { 
00:19:58.311 "cntlid": 65, 00:19:58.311 "qid": 0, 00:19:58.311 "state": "enabled", 00:19:58.311 "thread": "nvmf_tgt_poll_group_000", 00:19:58.311 "listen_address": { 00:19:58.311 "trtype": "TCP", 00:19:58.311 "adrfam": "IPv4", 00:19:58.311 "traddr": "10.0.0.2", 00:19:58.311 "trsvcid": "4420" 00:19:58.311 }, 00:19:58.311 "peer_address": { 00:19:58.311 "trtype": "TCP", 00:19:58.311 "adrfam": "IPv4", 00:19:58.311 "traddr": "10.0.0.1", 00:19:58.311 "trsvcid": "37722" 00:19:58.311 }, 00:19:58.311 "auth": { 00:19:58.311 "state": "completed", 00:19:58.311 "digest": "sha384", 00:19:58.311 "dhgroup": "ffdhe3072" 00:19:58.311 } 00:19:58.311 } 00:19:58.311 ]' 00:19:58.311 09:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.311 09:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.311 09:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.311 09:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:58.311 09:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.571 09:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.571 09:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.571 09:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.571 09:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTdlOGRjMTUxZWQ3OTg2MjRjZTQwNjg5ZmZmZGY1ZDJiYjVlMGVhN2M5Y2JjNzM2wwp8yw==: --dhchap-ctrl-secret DHHC-1:03:YTFmZWQxOTU0MzEwZDY0OGE1ZTQzOGQ0M2Q2NmY0MmJhZWZmNDU2NDA5MzBkODAzZTgzOWZjMjM1ZDI3NWI4ZksrVuA=: 00:19:59.514 09:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.514 09:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:59.514 09:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.514 09:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.514 09:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.514 09:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.514 09:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:59.514 09:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:59.514 09:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:59.514 09:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.514 09:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:19:59.514 09:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:59.514 09:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:59.514 09:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.514 09:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.514 09:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.514 09:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.514 09:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.514 09:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.514 09:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.775 00:19:59.775 09:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.775 09:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.775 09:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.036 09:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.036 09:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.036 09:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.036 09:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.036 09:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.036 09:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.036 { 00:20:00.036 "cntlid": 67, 00:20:00.036 "qid": 0, 00:20:00.036 "state": "enabled", 00:20:00.036 "thread": "nvmf_tgt_poll_group_000", 00:20:00.036 "listen_address": { 00:20:00.036 "trtype": "TCP", 00:20:00.036 "adrfam": "IPv4", 00:20:00.036 "traddr": "10.0.0.2", 00:20:00.036 "trsvcid": "4420" 00:20:00.036 }, 00:20:00.036 "peer_address": { 00:20:00.036 "trtype": "TCP", 00:20:00.036 "adrfam": "IPv4", 00:20:00.036 "traddr": "10.0.0.1", 00:20:00.036 "trsvcid": "37746" 00:20:00.036 }, 00:20:00.036 "auth": { 00:20:00.036 "state": "completed", 00:20:00.036 "digest": "sha384", 00:20:00.036 "dhgroup": "ffdhe3072" 00:20:00.036 } 00:20:00.036 } 00:20:00.036 ]' 00:20:00.036 09:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.036 09:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.036 09:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.036 09:28:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:00.036 09:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.036 09:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.036 09:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.036 09:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.295 09:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjYyOGZiNTAyZTBhZDYxMzEwOGI4Yzg5ZmNkMTYzMDfXT6l9: --dhchap-ctrl-secret DHHC-1:02:NDAzNTFiOWUyOGFhNjhlNGI5NjMxZGFjZjc3OTczMjAyN2Q0YjdmMWYyMWRiOTQ0QIEdSw==: 00:20:00.865 09:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.865 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:00.865 09:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.865 09:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.865 09:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.865 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.865 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:00.865 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:01.125 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:01.125 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.125 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:01.125 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:01.125 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:01.125 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.125 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.125 09:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.125 09:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.125 09:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.125 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.125 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.385 00:20:01.385 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.385 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.385 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.695 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.695 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.695 09:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.695 09:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.695 09:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.695 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.695 { 00:20:01.695 "cntlid": 69, 00:20:01.695 "qid": 0, 00:20:01.695 "state": "enabled", 00:20:01.695 "thread": "nvmf_tgt_poll_group_000", 00:20:01.695 "listen_address": { 00:20:01.695 "trtype": "TCP", 00:20:01.695 "adrfam": "IPv4", 00:20:01.695 "traddr": "10.0.0.2", 00:20:01.695 "trsvcid": "4420" 00:20:01.695 }, 00:20:01.695 "peer_address": { 00:20:01.695 "trtype": "TCP", 00:20:01.695 "adrfam": "IPv4", 00:20:01.695 "traddr": "10.0.0.1", 00:20:01.695 "trsvcid": "37770" 00:20:01.695 }, 00:20:01.695 "auth": { 00:20:01.695 "state": "completed", 00:20:01.695 "digest": "sha384", 00:20:01.695 "dhgroup": "ffdhe3072" 00:20:01.695 } 00:20:01.696 } 00:20:01.696 ]' 00:20:01.696 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.696 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:01.696 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.696 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:01.696 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.696 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.696 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.696 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.956 09:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZDJlYjgxZGUzMmRlNjFiM2MyMWZmNzlkYWIzNTBmYWU3NmNkNmEwMjU5MGZjOTAyb6nGcQ==: --dhchap-ctrl-secret 
DHHC-1:01:ZDQ5NGU3OGIyYzBhMWYzY2FhMTg3MGVhZmFlNjhjMGINfRK5: 00:20:02.528 09:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.528 09:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:02.528 09:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.528 09:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.528 09:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.528 09:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.528 09:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:02.528 09:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:02.789 09:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:02.789 09:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.789 09:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:02.789 09:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:02.789 09:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:02.789 09:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.789 09:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:02.789 09:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.789 09:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.789 09:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.789 09:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.789 09:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:03.050 00:20:03.050 09:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.050 09:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.050 09:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.050 09:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.050 09:28:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.050 09:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.050 09:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.050 09:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.050 09:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.050 { 00:20:03.050 "cntlid": 71, 00:20:03.050 "qid": 0, 00:20:03.050 "state": "enabled", 00:20:03.050 "thread": "nvmf_tgt_poll_group_000", 00:20:03.050 "listen_address": { 00:20:03.050 "trtype": "TCP", 00:20:03.050 "adrfam": "IPv4", 00:20:03.050 "traddr": "10.0.0.2", 00:20:03.050 "trsvcid": "4420" 00:20:03.050 }, 00:20:03.050 "peer_address": { 00:20:03.050 "trtype": "TCP", 00:20:03.050 "adrfam": "IPv4", 00:20:03.050 "traddr": "10.0.0.1", 00:20:03.050 "trsvcid": "37786" 00:20:03.050 }, 00:20:03.050 "auth": { 00:20:03.050 "state": "completed", 00:20:03.050 "digest": "sha384", 00:20:03.050 "dhgroup": "ffdhe3072" 00:20:03.050 } 00:20:03.050 } 00:20:03.050 ]' 00:20:03.050 09:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.310 09:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.310 09:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.310 09:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:03.310 09:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.310 09:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.310 09:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.310 09:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.570 09:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:MDc1YTFkZmUzMjEzODgxNjFlNDg5OGNkOGJiYzQ5ZTUyOTQ3NTc0MzdiMjNjMDgzMzhjYTFkMjRmNzBhY2NhYq3ofEE=: 00:20:04.143 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.143 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:04.143 09:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.143 09:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.143 09:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.143 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.143 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.143 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:04.143 09:28:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:04.404 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:04.404 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.404 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:04.404 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:04.404 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:04.404 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.404 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.404 09:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.404 09:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.404 09:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.404 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.404 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.664 00:20:04.665 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.665 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.665 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.665 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.665 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.665 09:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.665 09:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.665 09:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.665 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.665 { 00:20:04.665 "cntlid": 73, 00:20:04.665 "qid": 0, 00:20:04.665 "state": "enabled", 00:20:04.665 "thread": "nvmf_tgt_poll_group_000", 00:20:04.665 "listen_address": { 00:20:04.665 "trtype": "TCP", 00:20:04.665 "adrfam": "IPv4", 00:20:04.665 "traddr": "10.0.0.2", 00:20:04.665 "trsvcid": "4420" 00:20:04.665 }, 00:20:04.665 "peer_address": { 00:20:04.665 "trtype": "TCP", 00:20:04.665 "adrfam": "IPv4", 00:20:04.665 "traddr": "10.0.0.1", 00:20:04.665 "trsvcid": "37812" 00:20:04.665 }, 00:20:04.665 "auth": { 00:20:04.665 
"state": "completed", 00:20:04.665 "digest": "sha384", 00:20:04.665 "dhgroup": "ffdhe4096" 00:20:04.665 } 00:20:04.665 } 00:20:04.665 ]' 00:20:04.665 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.926 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.926 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.926 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:04.926 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.926 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.926 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.926 09:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.926 09:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTdlOGRjMTUxZWQ3OTg2MjRjZTQwNjg5ZmZmZGY1ZDJiYjVlMGVhN2M5Y2JjNzM2wwp8yw==: --dhchap-ctrl-secret DHHC-1:03:YTFmZWQxOTU0MzEwZDY0OGE1ZTQzOGQ0M2Q2NmY0MmJhZWZmNDU2NDA5MzBkODAzZTgzOWZjMjM1ZDI3NWI4ZksrVuA=: 00:20:05.868 09:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.868 09:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:05.868 09:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.868 09:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.868 09:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.868 09:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.868 09:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:05.868 09:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:05.868 09:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:05.868 09:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.868 09:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:05.868 09:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:05.868 09:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:05.868 09:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.868 09:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.868 09:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.868 09:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.868 09:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.868 09:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.868 09:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.128 00:20:06.128 09:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.128 09:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.128 09:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.392 09:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.392 09:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.392 09:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.392 09:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.392 09:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.392 09:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.392 { 00:20:06.392 "cntlid": 75, 00:20:06.392 "qid": 0, 00:20:06.392 "state": "enabled", 00:20:06.392 "thread": "nvmf_tgt_poll_group_000", 00:20:06.392 "listen_address": { 00:20:06.392 "trtype": "TCP", 00:20:06.392 "adrfam": "IPv4", 00:20:06.392 "traddr": "10.0.0.2", 00:20:06.392 "trsvcid": "4420" 00:20:06.392 }, 00:20:06.392 "peer_address": { 00:20:06.392 "trtype": "TCP", 00:20:06.392 "adrfam": "IPv4", 00:20:06.392 "traddr": "10.0.0.1", 00:20:06.392 "trsvcid": "37834" 00:20:06.392 }, 00:20:06.392 "auth": { 00:20:06.392 "state": "completed", 00:20:06.392 "digest": "sha384", 00:20:06.392 "dhgroup": "ffdhe4096" 00:20:06.392 } 00:20:06.392 } 00:20:06.392 ]' 00:20:06.392 09:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.392 09:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:06.392 09:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.392 09:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:06.392 09:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:06.392 09:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.392 09:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.392 09:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.683 09:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjYyOGZiNTAyZTBhZDYxMzEwOGI4Yzg5ZmNkMTYzMDfXT6l9: --dhchap-ctrl-secret DHHC-1:02:NDAzNTFiOWUyOGFhNjhlNGI5NjMxZGFjZjc3OTczMjAyN2Q0YjdmMWYyMWRiOTQ0QIEdSw==: 00:20:07.260 09:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.260 09:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:07.260 09:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.260 09:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.260 09:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.260 09:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.260 09:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:07.260 09:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:07.520 09:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:07.520 09:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.520 09:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:07.520 09:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:07.520 09:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:07.520 09:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.520 09:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.520 09:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.520 09:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.520 09:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.520 09:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.520 09:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:20:07.781 00:20:07.781 09:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.781 09:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.781 09:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.042 09:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.042 09:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.042 09:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.042 09:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.042 09:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.042 09:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.042 { 00:20:08.042 "cntlid": 77, 00:20:08.042 "qid": 0, 00:20:08.042 "state": "enabled", 00:20:08.042 "thread": "nvmf_tgt_poll_group_000", 00:20:08.042 "listen_address": { 00:20:08.042 "trtype": "TCP", 00:20:08.042 "adrfam": "IPv4", 00:20:08.042 "traddr": "10.0.0.2", 00:20:08.043 "trsvcid": "4420" 00:20:08.043 }, 00:20:08.043 "peer_address": { 00:20:08.043 "trtype": "TCP", 00:20:08.043 "adrfam": "IPv4", 00:20:08.043 "traddr": "10.0.0.1", 00:20:08.043 "trsvcid": "55860" 00:20:08.043 }, 00:20:08.043 "auth": { 00:20:08.043 "state": "completed", 00:20:08.043 "digest": "sha384", 00:20:08.043 "dhgroup": "ffdhe4096" 00:20:08.043 } 00:20:08.043 } 00:20:08.043 ]' 00:20:08.043 09:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.043 09:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.043 09:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.043 09:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:08.043 09:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.043 09:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.043 09:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.043 09:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.304 09:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZDJlYjgxZGUzMmRlNjFiM2MyMWZmNzlkYWIzNTBmYWU3NmNkNmEwMjU5MGZjOTAyb6nGcQ==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5NGU3OGIyYzBhMWYzY2FhMTg3MGVhZmFlNjhjMGINfRK5: 00:20:08.876 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.876 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:08.876 09:28:56 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.876 09:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.876 09:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.876 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.876 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:08.876 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:09.136 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:09.136 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.136 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:09.136 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:09.136 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:09.136 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.136 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:09.136 09:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.136 09:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.136 09:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.136 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.136 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.397 00:20:09.397 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.398 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.398 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.658 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.658 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.658 09:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.658 09:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.658 09:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.658 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.658 { 00:20:09.658 "cntlid": 79, 00:20:09.658 "qid": 
0, 00:20:09.658 "state": "enabled", 00:20:09.658 "thread": "nvmf_tgt_poll_group_000", 00:20:09.658 "listen_address": { 00:20:09.658 "trtype": "TCP", 00:20:09.658 "adrfam": "IPv4", 00:20:09.658 "traddr": "10.0.0.2", 00:20:09.658 "trsvcid": "4420" 00:20:09.658 }, 00:20:09.658 "peer_address": { 00:20:09.658 "trtype": "TCP", 00:20:09.658 "adrfam": "IPv4", 00:20:09.658 "traddr": "10.0.0.1", 00:20:09.658 "trsvcid": "55894" 00:20:09.658 }, 00:20:09.658 "auth": { 00:20:09.658 "state": "completed", 00:20:09.658 "digest": "sha384", 00:20:09.658 "dhgroup": "ffdhe4096" 00:20:09.658 } 00:20:09.658 } 00:20:09.658 ]' 00:20:09.658 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.658 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:09.658 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.658 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:09.658 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.658 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.658 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.658 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.919 09:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:MDc1YTFkZmUzMjEzODgxNjFlNDg5OGNkOGJiYzQ5ZTUyOTQ3NTc0MzdiMjNjMDgzMzhjYTFkMjRmNzBhY2NhYq3ofEE=: 00:20:10.491 09:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.491 09:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:10.491 09:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.491 09:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.492 09:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.492 09:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:10.492 09:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:10.492 09:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:10.492 09:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:10.752 09:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:10.752 09:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.752 09:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:10.752 09:28:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:10.752 09:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:10.752 09:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.752 09:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.752 09:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.752 09:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.752 09:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.753 09:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.753 09:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.014 00:20:11.276 09:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.276 09:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.276 09:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.276 09:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.276 09:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.276 09:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.276 09:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.276 09:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.276 09:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.276 { 00:20:11.276 "cntlid": 81, 00:20:11.276 "qid": 0, 00:20:11.276 "state": "enabled", 00:20:11.276 "thread": "nvmf_tgt_poll_group_000", 00:20:11.276 "listen_address": { 00:20:11.276 "trtype": "TCP", 00:20:11.276 "adrfam": "IPv4", 00:20:11.276 "traddr": "10.0.0.2", 00:20:11.276 "trsvcid": "4420" 00:20:11.276 }, 00:20:11.276 "peer_address": { 00:20:11.276 "trtype": "TCP", 00:20:11.276 "adrfam": "IPv4", 00:20:11.276 "traddr": "10.0.0.1", 00:20:11.276 "trsvcid": "55934" 00:20:11.276 }, 00:20:11.276 "auth": { 00:20:11.276 "state": "completed", 00:20:11.276 "digest": "sha384", 00:20:11.276 "dhgroup": "ffdhe6144" 00:20:11.276 } 00:20:11.276 } 00:20:11.276 ]' 00:20:11.276 09:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.276 09:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.276 09:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.538 09:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:11.538 09:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.538 09:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.538 09:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.538 09:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.538 09:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTdlOGRjMTUxZWQ3OTg2MjRjZTQwNjg5ZmZmZGY1ZDJiYjVlMGVhN2M5Y2JjNzM2wwp8yw==: --dhchap-ctrl-secret DHHC-1:03:YTFmZWQxOTU0MzEwZDY0OGE1ZTQzOGQ0M2Q2NmY0MmJhZWZmNDU2NDA5MzBkODAzZTgzOWZjMjM1ZDI3NWI4ZksrVuA=: 00:20:12.481 09:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.481 09:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:12.481 09:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.481 09:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.481 09:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.481 09:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.481 09:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:12.481 09:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:12.481 09:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:12.481 09:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.481 09:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:12.481 09:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:12.481 09:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:12.481 09:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.481 09:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.481 09:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.481 09:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.481 09:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.481 09:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.481 09:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.742 00:20:12.742 09:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.742 09:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.742 09:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.004 09:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.004 09:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.004 09:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.004 09:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.004 09:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.004 09:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.004 { 00:20:13.004 "cntlid": 83, 00:20:13.004 "qid": 0, 00:20:13.004 "state": "enabled", 00:20:13.004 "thread": "nvmf_tgt_poll_group_000", 00:20:13.004 "listen_address": { 00:20:13.004 "trtype": "TCP", 00:20:13.004 "adrfam": "IPv4", 00:20:13.004 "traddr": "10.0.0.2", 00:20:13.004 "trsvcid": "4420" 00:20:13.004 }, 00:20:13.004 "peer_address": { 00:20:13.004 "trtype": "TCP", 00:20:13.004 "adrfam": "IPv4", 00:20:13.004 "traddr": "10.0.0.1", 00:20:13.004 "trsvcid": "55954" 00:20:13.004 }, 00:20:13.004 "auth": { 00:20:13.004 "state": "completed", 00:20:13.004 "digest": "sha384", 00:20:13.004 "dhgroup": "ffdhe6144" 00:20:13.004 } 00:20:13.004 } 00:20:13.004 ]' 00:20:13.004 09:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.004 09:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.004 09:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.264 09:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:13.264 09:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.264 09:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.264 09:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.264 09:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.264 09:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjYyOGZiNTAyZTBhZDYxMzEwOGI4Yzg5ZmNkMTYzMDfXT6l9: --dhchap-ctrl-secret 
DHHC-1:02:NDAzNTFiOWUyOGFhNjhlNGI5NjMxZGFjZjc3OTczMjAyN2Q0YjdmMWYyMWRiOTQ0QIEdSw==: 00:20:14.208 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.208 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:14.208 09:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.208 09:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.208 09:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.208 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.208 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:14.208 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:14.208 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:14.208 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.208 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:14.208 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:14.209 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:14.209 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.209 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.209 09:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.209 09:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.209 09:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.209 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.209 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.469 00:20:14.731 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.731 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.731 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.731 09:29:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.731 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.731 09:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.731 09:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.731 09:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.731 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.731 { 00:20:14.731 "cntlid": 85, 00:20:14.731 "qid": 0, 00:20:14.731 "state": "enabled", 00:20:14.731 "thread": "nvmf_tgt_poll_group_000", 00:20:14.731 "listen_address": { 00:20:14.731 "trtype": "TCP", 00:20:14.731 "adrfam": "IPv4", 00:20:14.731 "traddr": "10.0.0.2", 00:20:14.731 "trsvcid": "4420" 00:20:14.731 }, 00:20:14.731 "peer_address": { 00:20:14.731 "trtype": "TCP", 00:20:14.731 "adrfam": "IPv4", 00:20:14.731 "traddr": "10.0.0.1", 00:20:14.731 "trsvcid": "55964" 00:20:14.731 }, 00:20:14.731 "auth": { 00:20:14.731 "state": "completed", 00:20:14.731 "digest": "sha384", 00:20:14.731 "dhgroup": "ffdhe6144" 00:20:14.731 } 00:20:14.731 } 00:20:14.731 ]' 00:20:14.731 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.731 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.731 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.991 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:14.991 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.991 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.991 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.991 09:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.991 09:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZDJlYjgxZGUzMmRlNjFiM2MyMWZmNzlkYWIzNTBmYWU3NmNkNmEwMjU5MGZjOTAyb6nGcQ==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5NGU3OGIyYzBhMWYzY2FhMTg3MGVhZmFlNjhjMGINfRK5: 00:20:15.931 09:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.931 09:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:15.931 09:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.931 09:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.931 09:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.931 09:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.931 09:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
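The entries above and below are successive passes of the connect_authenticate helper in SPDK's test/nvmf/target/auth.sh: for each digest/DH-group/key-index combination the host-side bdev_nvme application is reconfigured, the host NQN is re-registered on the subsystem with the matching DH-HMAC-CHAP keys, and a controller is attached over TCP. A minimal hand-written sketch of one such pass (sha384, ffdhe6144, key2, the combination logged a little earlier) follows; the shortened scripts/rpc.py path is ours, and it assumes the subsystem nqn.2024-03.io.spdk:cnode0, its 10.0.0.2:4420 listener and the key2/ckey2 key entries were set up earlier in the run, as the trace implies.

  # host side (bdev_nvme app on /var/tmp/host.sock): restrict digests and DH groups for this pass
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
  # target side (default RPC socket): allow the host NQN to authenticate with key2/ckey2
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # host side: attach a controller over TCP, authenticating with the same key pair
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2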
00:20:15.931 09:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:15.931 09:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:15.931 09:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.931 09:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:15.931 09:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:15.931 09:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:15.931 09:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.931 09:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:15.931 09:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.931 09:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.931 09:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.931 09:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.931 09:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:16.191 00:20:16.191 09:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.191 09:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.191 09:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.451 09:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.451 09:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.451 09:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.451 09:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.451 09:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.451 09:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.451 { 00:20:16.451 "cntlid": 87, 00:20:16.451 "qid": 0, 00:20:16.451 "state": "enabled", 00:20:16.451 "thread": "nvmf_tgt_poll_group_000", 00:20:16.451 "listen_address": { 00:20:16.451 "trtype": "TCP", 00:20:16.451 "adrfam": "IPv4", 00:20:16.451 "traddr": "10.0.0.2", 00:20:16.451 "trsvcid": "4420" 00:20:16.451 }, 00:20:16.451 "peer_address": { 00:20:16.451 "trtype": "TCP", 00:20:16.451 "adrfam": "IPv4", 00:20:16.451 "traddr": "10.0.0.1", 00:20:16.451 "trsvcid": "55990" 00:20:16.451 }, 00:20:16.451 "auth": { 00:20:16.451 "state": "completed", 
00:20:16.451 "digest": "sha384", 00:20:16.451 "dhgroup": "ffdhe6144" 00:20:16.451 } 00:20:16.451 } 00:20:16.451 ]' 00:20:16.451 09:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.451 09:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.451 09:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.451 09:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:16.451 09:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.711 09:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.711 09:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.711 09:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.711 09:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:MDc1YTFkZmUzMjEzODgxNjFlNDg5OGNkOGJiYzQ5ZTUyOTQ3NTc0MzdiMjNjMDgzMzhjYTFkMjRmNzBhY2NhYq3ofEE=: 00:20:17.651 09:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.651 09:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:17.651 09:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.651 09:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.651 09:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.651 09:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.651 09:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.651 09:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:17.651 09:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:17.651 09:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:17.651 09:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.651 09:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:17.651 09:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:17.651 09:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:17.651 09:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.651 09:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:17.651 09:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.651 09:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.651 09:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.651 09:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.651 09:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.221 00:20:18.221 09:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.221 09:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.221 09:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.221 09:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.221 09:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.221 09:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.221 09:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.221 09:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.221 09:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.221 { 00:20:18.221 "cntlid": 89, 00:20:18.221 "qid": 0, 00:20:18.221 "state": "enabled", 00:20:18.221 "thread": "nvmf_tgt_poll_group_000", 00:20:18.221 "listen_address": { 00:20:18.221 "trtype": "TCP", 00:20:18.221 "adrfam": "IPv4", 00:20:18.221 "traddr": "10.0.0.2", 00:20:18.221 "trsvcid": "4420" 00:20:18.221 }, 00:20:18.221 "peer_address": { 00:20:18.221 "trtype": "TCP", 00:20:18.221 "adrfam": "IPv4", 00:20:18.221 "traddr": "10.0.0.1", 00:20:18.221 "trsvcid": "44040" 00:20:18.221 }, 00:20:18.221 "auth": { 00:20:18.221 "state": "completed", 00:20:18.221 "digest": "sha384", 00:20:18.221 "dhgroup": "ffdhe8192" 00:20:18.221 } 00:20:18.221 } 00:20:18.221 ]' 00:20:18.221 09:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.221 09:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.481 09:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.481 09:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:18.481 09:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.481 09:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.481 09:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.481 09:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.481 09:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTdlOGRjMTUxZWQ3OTg2MjRjZTQwNjg5ZmZmZGY1ZDJiYjVlMGVhN2M5Y2JjNzM2wwp8yw==: --dhchap-ctrl-secret DHHC-1:03:YTFmZWQxOTU0MzEwZDY0OGE1ZTQzOGQ0M2Q2NmY0MmJhZWZmNDU2NDA5MzBkODAzZTgzOWZjMjM1ZDI3NWI4ZksrVuA=: 00:20:19.422 09:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.422 09:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:19.422 09:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.422 09:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.422 09:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.422 09:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.422 09:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:19.422 09:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:19.422 09:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:19.422 09:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.422 09:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:19.422 09:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:19.422 09:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:19.422 09:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.422 09:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.422 09:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.422 09:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.422 09:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.422 09:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.422 09:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
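Once the controller is attached, the script verifies that authentication actually completed with the expected parameters: it confirms nvme0 appears in bdev_nvme_get_controllers, pulls the subsystem's qpairs from the target, and jq-checks the auth object before detaching again. Below is a condensed sketch of those checks for the sha384/ffdhe8192 pass in progress here, under the same shortened scripts/rpc.py path; the qpairs variable and inline [[ ]] comparisons are our shorthand for the hostrpc/rpc_cmd pipelines shown in the trace.

  # the attached controller must be visible on the host-side application
  [[ "$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
  # ask the target which qpairs the subsystem has and what they negotiated
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha384 ]]
  [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe8192 ]]
  [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]
  # detach so the next digest/dhgroup/key combination starts from a clean host
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0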
00:20:19.992 00:20:19.992 09:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.992 09:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.992 09:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.252 09:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.252 09:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.252 09:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.253 09:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.253 09:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.253 09:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.253 { 00:20:20.253 "cntlid": 91, 00:20:20.253 "qid": 0, 00:20:20.253 "state": "enabled", 00:20:20.253 "thread": "nvmf_tgt_poll_group_000", 00:20:20.253 "listen_address": { 00:20:20.253 "trtype": "TCP", 00:20:20.253 "adrfam": "IPv4", 00:20:20.253 "traddr": "10.0.0.2", 00:20:20.253 "trsvcid": "4420" 00:20:20.253 }, 00:20:20.253 "peer_address": { 00:20:20.253 "trtype": "TCP", 00:20:20.253 "adrfam": "IPv4", 00:20:20.253 "traddr": "10.0.0.1", 00:20:20.253 "trsvcid": "44068" 00:20:20.253 }, 00:20:20.253 "auth": { 00:20:20.253 "state": "completed", 00:20:20.253 "digest": "sha384", 00:20:20.253 "dhgroup": "ffdhe8192" 00:20:20.253 } 00:20:20.253 } 00:20:20.253 ]' 00:20:20.253 09:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:20.253 09:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.253 09:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.253 09:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:20.253 09:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.253 09:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.253 09:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.253 09:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.512 09:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjYyOGZiNTAyZTBhZDYxMzEwOGI4Yzg5ZmNkMTYzMDfXT6l9: --dhchap-ctrl-secret DHHC-1:02:NDAzNTFiOWUyOGFhNjhlNGI5NjMxZGFjZjc3OTczMjAyN2Q0YjdmMWYyMWRiOTQ0QIEdSw==: 00:20:21.080 09:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.080 09:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:21.080 09:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:21.080 09:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.080 09:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.080 09:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.080 09:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:21.080 09:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:21.340 09:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:21.340 09:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.340 09:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:21.340 09:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:21.340 09:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:21.340 09:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.340 09:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.340 09:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.340 09:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.340 09:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.340 09:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.340 09:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.909 00:20:21.909 09:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.909 09:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.909 09:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.909 09:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.909 09:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.909 09:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.909 09:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.169 09:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.169 09:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:22.169 { 
00:20:22.169 "cntlid": 93, 00:20:22.169 "qid": 0, 00:20:22.169 "state": "enabled", 00:20:22.169 "thread": "nvmf_tgt_poll_group_000", 00:20:22.169 "listen_address": { 00:20:22.169 "trtype": "TCP", 00:20:22.169 "adrfam": "IPv4", 00:20:22.169 "traddr": "10.0.0.2", 00:20:22.169 "trsvcid": "4420" 00:20:22.169 }, 00:20:22.169 "peer_address": { 00:20:22.169 "trtype": "TCP", 00:20:22.169 "adrfam": "IPv4", 00:20:22.169 "traddr": "10.0.0.1", 00:20:22.169 "trsvcid": "44102" 00:20:22.169 }, 00:20:22.169 "auth": { 00:20:22.169 "state": "completed", 00:20:22.169 "digest": "sha384", 00:20:22.169 "dhgroup": "ffdhe8192" 00:20:22.169 } 00:20:22.169 } 00:20:22.169 ]' 00:20:22.169 09:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:22.169 09:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.169 09:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:22.169 09:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:22.169 09:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:22.169 09:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.169 09:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.169 09:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.429 09:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZDJlYjgxZGUzMmRlNjFiM2MyMWZmNzlkYWIzNTBmYWU3NmNkNmEwMjU5MGZjOTAyb6nGcQ==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5NGU3OGIyYzBhMWYzY2FhMTg3MGVhZmFlNjhjMGINfRK5: 00:20:22.999 09:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.999 09:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:22.999 09:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.999 09:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.999 09:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.999 09:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.999 09:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:22.999 09:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:23.260 09:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:23.260 09:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.260 09:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:23.260 09:29:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:23.260 09:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:23.260 09:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.260 09:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:23.260 09:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.260 09:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.260 09:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.260 09:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:23.260 09:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:23.831 00:20:23.831 09:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.831 09:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.832 09:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.832 09:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.832 09:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.832 09:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.832 09:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.832 09:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.832 09:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.832 { 00:20:23.832 "cntlid": 95, 00:20:23.832 "qid": 0, 00:20:23.832 "state": "enabled", 00:20:23.832 "thread": "nvmf_tgt_poll_group_000", 00:20:23.832 "listen_address": { 00:20:23.832 "trtype": "TCP", 00:20:23.832 "adrfam": "IPv4", 00:20:23.832 "traddr": "10.0.0.2", 00:20:23.832 "trsvcid": "4420" 00:20:23.832 }, 00:20:23.832 "peer_address": { 00:20:23.832 "trtype": "TCP", 00:20:23.832 "adrfam": "IPv4", 00:20:23.832 "traddr": "10.0.0.1", 00:20:23.832 "trsvcid": "44124" 00:20:23.832 }, 00:20:23.832 "auth": { 00:20:23.832 "state": "completed", 00:20:23.832 "digest": "sha384", 00:20:23.832 "dhgroup": "ffdhe8192" 00:20:23.832 } 00:20:23.832 } 00:20:23.832 ]' 00:20:23.832 09:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.091 09:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.091 09:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.091 09:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:24.091 09:29:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.091 09:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.091 09:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.091 09:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.351 09:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:MDc1YTFkZmUzMjEzODgxNjFlNDg5OGNkOGJiYzQ5ZTUyOTQ3NTc0MzdiMjNjMDgzMzhjYTFkMjRmNzBhY2NhYq3ofEE=: 00:20:24.920 09:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.920 09:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:24.920 09:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.920 09:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.920 09:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.920 09:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:24.920 09:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.920 09:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.920 09:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:24.920 09:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:24.920 09:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:20:24.920 09:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.920 09:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:24.920 09:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:24.920 09:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:24.920 09:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.920 09:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.920 09:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.920 09:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.920 09:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.920 09:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.920 09:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.179 00:20:25.179 09:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.179 09:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.179 09:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.440 09:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.440 09:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.440 09:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.440 09:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.440 09:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.440 09:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.440 { 00:20:25.440 "cntlid": 97, 00:20:25.440 "qid": 0, 00:20:25.440 "state": "enabled", 00:20:25.440 "thread": "nvmf_tgt_poll_group_000", 00:20:25.440 "listen_address": { 00:20:25.440 "trtype": "TCP", 00:20:25.440 "adrfam": "IPv4", 00:20:25.440 "traddr": "10.0.0.2", 00:20:25.440 "trsvcid": "4420" 00:20:25.440 }, 00:20:25.440 "peer_address": { 00:20:25.440 "trtype": "TCP", 00:20:25.440 "adrfam": "IPv4", 00:20:25.440 "traddr": "10.0.0.1", 00:20:25.440 "trsvcid": "44162" 00:20:25.440 }, 00:20:25.440 "auth": { 00:20:25.440 "state": "completed", 00:20:25.440 "digest": "sha512", 00:20:25.440 "dhgroup": "null" 00:20:25.440 } 00:20:25.440 } 00:20:25.440 ]' 00:20:25.440 09:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.440 09:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:25.440 09:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.440 09:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:25.440 09:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.440 09:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.440 09:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.440 09:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.700 09:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTdlOGRjMTUxZWQ3OTg2MjRjZTQwNjg5ZmZmZGY1ZDJiYjVlMGVhN2M5Y2JjNzM2wwp8yw==: --dhchap-ctrl-secret 
DHHC-1:03:YTFmZWQxOTU0MzEwZDY0OGE1ZTQzOGQ0M2Q2NmY0MmJhZWZmNDU2NDA5MzBkODAzZTgzOWZjMjM1ZDI3NWI4ZksrVuA=: 00:20:26.300 09:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.300 09:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:26.300 09:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.300 09:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.300 09:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.300 09:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.300 09:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:26.300 09:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:26.591 09:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:20:26.591 09:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.591 09:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:26.591 09:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:26.591 09:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:26.591 09:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.591 09:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.591 09:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.591 09:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.591 09:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.591 09:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.591 09:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.591 00:20:26.591 09:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:26.591 09:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.591 09:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:26.850 09:29:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.850 09:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.850 09:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.850 09:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.850 09:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.850 09:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.850 { 00:20:26.850 "cntlid": 99, 00:20:26.850 "qid": 0, 00:20:26.850 "state": "enabled", 00:20:26.850 "thread": "nvmf_tgt_poll_group_000", 00:20:26.850 "listen_address": { 00:20:26.850 "trtype": "TCP", 00:20:26.850 "adrfam": "IPv4", 00:20:26.850 "traddr": "10.0.0.2", 00:20:26.850 "trsvcid": "4420" 00:20:26.850 }, 00:20:26.850 "peer_address": { 00:20:26.850 "trtype": "TCP", 00:20:26.850 "adrfam": "IPv4", 00:20:26.850 "traddr": "10.0.0.1", 00:20:26.850 "trsvcid": "43630" 00:20:26.850 }, 00:20:26.850 "auth": { 00:20:26.850 "state": "completed", 00:20:26.850 "digest": "sha512", 00:20:26.850 "dhgroup": "null" 00:20:26.850 } 00:20:26.850 } 00:20:26.850 ]' 00:20:26.850 09:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.850 09:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:26.850 09:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.850 09:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:26.850 09:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.110 09:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.110 09:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.110 09:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.110 09:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjYyOGZiNTAyZTBhZDYxMzEwOGI4Yzg5ZmNkMTYzMDfXT6l9: --dhchap-ctrl-secret DHHC-1:02:NDAzNTFiOWUyOGFhNjhlNGI5NjMxZGFjZjc3OTczMjAyN2Q0YjdmMWYyMWRiOTQ0QIEdSw==: 00:20:28.051 09:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.052 09:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:28.052 09:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.052 09:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.052 09:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.052 09:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.052 09:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:28.052 09:29:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:28.052 09:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:20:28.052 09:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:28.052 09:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:28.052 09:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:28.052 09:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:28.052 09:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.052 09:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.052 09:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.052 09:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.052 09:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.052 09:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.052 09:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.313 00:20:28.313 09:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.313 09:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.313 09:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.313 09:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.313 09:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.313 09:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.313 09:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.313 09:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.313 09:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.313 { 00:20:28.313 "cntlid": 101, 00:20:28.313 "qid": 0, 00:20:28.313 "state": "enabled", 00:20:28.313 "thread": "nvmf_tgt_poll_group_000", 00:20:28.313 "listen_address": { 00:20:28.313 "trtype": "TCP", 00:20:28.313 "adrfam": "IPv4", 00:20:28.313 "traddr": "10.0.0.2", 00:20:28.313 "trsvcid": "4420" 00:20:28.313 }, 00:20:28.313 "peer_address": { 00:20:28.313 "trtype": "TCP", 00:20:28.313 "adrfam": "IPv4", 00:20:28.313 "traddr": "10.0.0.1", 00:20:28.313 "trsvcid": "43646" 00:20:28.313 }, 00:20:28.313 "auth": 
{ 00:20:28.313 "state": "completed", 00:20:28.313 "digest": "sha512", 00:20:28.313 "dhgroup": "null" 00:20:28.313 } 00:20:28.313 } 00:20:28.313 ]' 00:20:28.313 09:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.575 09:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:28.575 09:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.575 09:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:28.575 09:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.575 09:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.575 09:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.575 09:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.836 09:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZDJlYjgxZGUzMmRlNjFiM2MyMWZmNzlkYWIzNTBmYWU3NmNkNmEwMjU5MGZjOTAyb6nGcQ==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5NGU3OGIyYzBhMWYzY2FhMTg3MGVhZmFlNjhjMGINfRK5: 00:20:29.407 09:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.407 09:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:29.407 09:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.407 09:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.407 09:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.407 09:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.407 09:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:29.407 09:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:29.668 09:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:29.668 09:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.668 09:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:29.668 09:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:29.668 09:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:29.668 09:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.668 09:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:29.668 09:29:16 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.668 09:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.668 09:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.668 09:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.668 09:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.929 00:20:29.929 09:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.929 09:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.929 09:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.929 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.929 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.929 09:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.929 09:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.929 09:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.929 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.929 { 00:20:29.929 "cntlid": 103, 00:20:29.929 "qid": 0, 00:20:29.929 "state": "enabled", 00:20:29.929 "thread": "nvmf_tgt_poll_group_000", 00:20:29.929 "listen_address": { 00:20:29.929 "trtype": "TCP", 00:20:29.929 "adrfam": "IPv4", 00:20:29.929 "traddr": "10.0.0.2", 00:20:29.929 "trsvcid": "4420" 00:20:29.929 }, 00:20:29.929 "peer_address": { 00:20:29.929 "trtype": "TCP", 00:20:29.929 "adrfam": "IPv4", 00:20:29.929 "traddr": "10.0.0.1", 00:20:29.929 "trsvcid": "43682" 00:20:29.929 }, 00:20:29.929 "auth": { 00:20:29.929 "state": "completed", 00:20:29.929 "digest": "sha512", 00:20:29.929 "dhgroup": "null" 00:20:29.929 } 00:20:29.929 } 00:20:29.929 ]' 00:20:29.929 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.929 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:29.929 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.191 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:30.191 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.191 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.191 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.191 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.191 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:MDc1YTFkZmUzMjEzODgxNjFlNDg5OGNkOGJiYzQ5ZTUyOTQ3NTc0MzdiMjNjMDgzMzhjYTFkMjRmNzBhY2NhYq3ofEE=: 00:20:30.764 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.764 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:30.764 09:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.764 09:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.764 09:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.026 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.026 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.026 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:31.026 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:31.026 09:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:31.026 09:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.026 09:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:31.026 09:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:31.026 09:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:31.026 09:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.026 09:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.026 09:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.026 09:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.026 09:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.026 09:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.026 09:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.288 00:20:31.288 09:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.288 09:29:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.288 09:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.549 09:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.549 09:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.549 09:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.549 09:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.549 09:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.549 09:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.549 { 00:20:31.549 "cntlid": 105, 00:20:31.549 "qid": 0, 00:20:31.549 "state": "enabled", 00:20:31.549 "thread": "nvmf_tgt_poll_group_000", 00:20:31.549 "listen_address": { 00:20:31.549 "trtype": "TCP", 00:20:31.549 "adrfam": "IPv4", 00:20:31.549 "traddr": "10.0.0.2", 00:20:31.549 "trsvcid": "4420" 00:20:31.549 }, 00:20:31.549 "peer_address": { 00:20:31.549 "trtype": "TCP", 00:20:31.549 "adrfam": "IPv4", 00:20:31.549 "traddr": "10.0.0.1", 00:20:31.549 "trsvcid": "43702" 00:20:31.549 }, 00:20:31.549 "auth": { 00:20:31.549 "state": "completed", 00:20:31.549 "digest": "sha512", 00:20:31.549 "dhgroup": "ffdhe2048" 00:20:31.549 } 00:20:31.549 } 00:20:31.549 ]' 00:20:31.549 09:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.549 09:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:31.549 09:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.549 09:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:31.549 09:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.549 09:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.549 09:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.549 09:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.810 09:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTdlOGRjMTUxZWQ3OTg2MjRjZTQwNjg5ZmZmZGY1ZDJiYjVlMGVhN2M5Y2JjNzM2wwp8yw==: --dhchap-ctrl-secret DHHC-1:03:YTFmZWQxOTU0MzEwZDY0OGE1ZTQzOGQ0M2Q2NmY0MmJhZWZmNDU2NDA5MzBkODAzZTgzOWZjMjM1ZDI3NWI4ZksrVuA=: 00:20:32.411 09:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.411 09:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:32.411 09:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.411 09:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
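For reference, each pass of the loop traced above exercises one digest/dhgroup/key combination end to end. A minimal shell sketch of that round trip, using the host RPC socket, addresses, and NQNs that appear in the trace (key1/ckey1 are assumed to have been loaded as DH-HMAC-CHAP keys earlier in the run, and TGT_SOCK is only a stand-in for the target-side RPC socket that rpc_cmd wraps):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # host side: restrict the bdev_nvme initiator to one digest/dhgroup pair
    $RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    # target side: allow the host NQN and bind it to a DH-HMAC-CHAP key (plus optional controller key)
    $RPC -s $TGT_SOCK nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # host side: attach a controller with the matching key, which forces authentication on connect
    $RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q $HOSTNQN -n $SUBNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # verify the controller came up and the qpair reports the negotiated auth parameters
    $RPC -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name'            # expect: nvme0
    $RPC -s $TGT_SOCK nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth'    # state/digest/dhgroup
    # tear down before the next combination
    $RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0
    $RPC -s $TGT_SOCK nvmf_subsystem_remove_host $SUBNQN $HOSTNQN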
00:20:32.411 09:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.411 09:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.411 09:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:32.411 09:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:32.411 09:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:32.411 09:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.411 09:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:32.411 09:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:32.411 09:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:32.411 09:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.411 09:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.411 09:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.411 09:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.412 09:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.412 09:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.412 09:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.673 00:20:32.673 09:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.673 09:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.673 09:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.933 09:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.933 09:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.933 09:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.933 09:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.933 09:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.933 09:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.933 { 00:20:32.933 "cntlid": 107, 00:20:32.933 "qid": 0, 00:20:32.933 "state": "enabled", 00:20:32.933 "thread": 
"nvmf_tgt_poll_group_000", 00:20:32.933 "listen_address": { 00:20:32.933 "trtype": "TCP", 00:20:32.933 "adrfam": "IPv4", 00:20:32.933 "traddr": "10.0.0.2", 00:20:32.933 "trsvcid": "4420" 00:20:32.933 }, 00:20:32.933 "peer_address": { 00:20:32.933 "trtype": "TCP", 00:20:32.933 "adrfam": "IPv4", 00:20:32.933 "traddr": "10.0.0.1", 00:20:32.933 "trsvcid": "43732" 00:20:32.933 }, 00:20:32.933 "auth": { 00:20:32.933 "state": "completed", 00:20:32.933 "digest": "sha512", 00:20:32.933 "dhgroup": "ffdhe2048" 00:20:32.933 } 00:20:32.933 } 00:20:32.933 ]' 00:20:32.933 09:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.933 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:32.933 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.933 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:32.933 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.193 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.193 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.193 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.193 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjYyOGZiNTAyZTBhZDYxMzEwOGI4Yzg5ZmNkMTYzMDfXT6l9: --dhchap-ctrl-secret DHHC-1:02:NDAzNTFiOWUyOGFhNjhlNGI5NjMxZGFjZjc3OTczMjAyN2Q0YjdmMWYyMWRiOTQ0QIEdSw==: 00:20:34.135 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.135 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:34.135 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.135 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.135 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.135 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.135 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:34.135 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:34.135 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:20:34.135 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.135 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:34.135 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:34.135 09:29:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:34.135 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.135 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.135 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.135 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.135 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.135 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.135 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.395 00:20:34.395 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.395 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.395 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.395 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.395 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.395 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.395 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.395 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.395 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.395 { 00:20:34.395 "cntlid": 109, 00:20:34.395 "qid": 0, 00:20:34.395 "state": "enabled", 00:20:34.395 "thread": "nvmf_tgt_poll_group_000", 00:20:34.395 "listen_address": { 00:20:34.395 "trtype": "TCP", 00:20:34.395 "adrfam": "IPv4", 00:20:34.395 "traddr": "10.0.0.2", 00:20:34.395 "trsvcid": "4420" 00:20:34.395 }, 00:20:34.395 "peer_address": { 00:20:34.395 "trtype": "TCP", 00:20:34.395 "adrfam": "IPv4", 00:20:34.395 "traddr": "10.0.0.1", 00:20:34.395 "trsvcid": "43764" 00:20:34.395 }, 00:20:34.395 "auth": { 00:20:34.395 "state": "completed", 00:20:34.395 "digest": "sha512", 00:20:34.395 "dhgroup": "ffdhe2048" 00:20:34.395 } 00:20:34.395 } 00:20:34.395 ]' 00:20:34.396 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.656 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:34.656 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.656 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:34.656 09:29:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.656 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.656 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.656 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.916 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZDJlYjgxZGUzMmRlNjFiM2MyMWZmNzlkYWIzNTBmYWU3NmNkNmEwMjU5MGZjOTAyb6nGcQ==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5NGU3OGIyYzBhMWYzY2FhMTg3MGVhZmFlNjhjMGINfRK5: 00:20:35.487 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.487 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:35.487 09:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.487 09:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.487 09:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.487 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.487 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:35.487 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:35.747 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:20:35.747 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.747 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:35.747 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:35.747 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:35.747 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.747 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:35.747 09:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.747 09:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.747 09:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.747 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.747 09:29:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.747 00:20:36.007 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.007 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.007 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.007 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.007 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.007 09:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.007 09:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.007 09:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.007 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.007 { 00:20:36.007 "cntlid": 111, 00:20:36.007 "qid": 0, 00:20:36.007 "state": "enabled", 00:20:36.007 "thread": "nvmf_tgt_poll_group_000", 00:20:36.007 "listen_address": { 00:20:36.007 "trtype": "TCP", 00:20:36.007 "adrfam": "IPv4", 00:20:36.007 "traddr": "10.0.0.2", 00:20:36.007 "trsvcid": "4420" 00:20:36.007 }, 00:20:36.007 "peer_address": { 00:20:36.007 "trtype": "TCP", 00:20:36.007 "adrfam": "IPv4", 00:20:36.007 "traddr": "10.0.0.1", 00:20:36.007 "trsvcid": "43780" 00:20:36.007 }, 00:20:36.007 "auth": { 00:20:36.007 "state": "completed", 00:20:36.007 "digest": "sha512", 00:20:36.007 "dhgroup": "ffdhe2048" 00:20:36.007 } 00:20:36.007 } 00:20:36.007 ]' 00:20:36.007 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.007 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:36.007 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.272 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:36.272 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.272 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.272 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.272 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.273 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:MDc1YTFkZmUzMjEzODgxNjFlNDg5OGNkOGJiYzQ5ZTUyOTQ3NTc0MzdiMjNjMDgzMzhjYTFkMjRmNzBhY2NhYq3ofEE=: 00:20:37.215 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.215 09:29:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:37.215 09:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.215 09:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.215 09:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.215 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.215 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.215 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:37.215 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:37.215 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:20:37.215 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.215 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:37.215 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:37.215 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:37.215 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.215 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.215 09:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.215 09:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.215 09:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.215 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.215 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.476 00:20:37.476 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.476 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.476 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.736 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.736 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.736 09:29:24 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.736 09:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.736 09:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.736 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.736 { 00:20:37.736 "cntlid": 113, 00:20:37.736 "qid": 0, 00:20:37.736 "state": "enabled", 00:20:37.736 "thread": "nvmf_tgt_poll_group_000", 00:20:37.736 "listen_address": { 00:20:37.736 "trtype": "TCP", 00:20:37.736 "adrfam": "IPv4", 00:20:37.736 "traddr": "10.0.0.2", 00:20:37.736 "trsvcid": "4420" 00:20:37.736 }, 00:20:37.736 "peer_address": { 00:20:37.736 "trtype": "TCP", 00:20:37.736 "adrfam": "IPv4", 00:20:37.736 "traddr": "10.0.0.1", 00:20:37.736 "trsvcid": "60014" 00:20:37.736 }, 00:20:37.736 "auth": { 00:20:37.736 "state": "completed", 00:20:37.736 "digest": "sha512", 00:20:37.736 "dhgroup": "ffdhe3072" 00:20:37.736 } 00:20:37.736 } 00:20:37.736 ]' 00:20:37.736 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.736 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:37.736 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.736 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:37.736 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.736 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.736 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.736 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.997 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTdlOGRjMTUxZWQ3OTg2MjRjZTQwNjg5ZmZmZGY1ZDJiYjVlMGVhN2M5Y2JjNzM2wwp8yw==: --dhchap-ctrl-secret DHHC-1:03:YTFmZWQxOTU0MzEwZDY0OGE1ZTQzOGQ0M2Q2NmY0MmJhZWZmNDU2NDA5MzBkODAzZTgzOWZjMjM1ZDI3NWI4ZksrVuA=: 00:20:38.566 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.566 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:38.566 09:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.566 09:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.566 09:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.566 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.566 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:38.566 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:38.827 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:20:38.827 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.827 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:38.827 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:38.827 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:38.827 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.827 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.827 09:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.827 09:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.827 09:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.827 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.827 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.088 00:20:39.088 09:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.088 09:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.088 09:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.348 09:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.348 09:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.348 09:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.348 09:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.348 09:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.348 09:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.348 { 00:20:39.348 "cntlid": 115, 00:20:39.348 "qid": 0, 00:20:39.348 "state": "enabled", 00:20:39.348 "thread": "nvmf_tgt_poll_group_000", 00:20:39.349 "listen_address": { 00:20:39.349 "trtype": "TCP", 00:20:39.349 "adrfam": "IPv4", 00:20:39.349 "traddr": "10.0.0.2", 00:20:39.349 "trsvcid": "4420" 00:20:39.349 }, 00:20:39.349 "peer_address": { 00:20:39.349 "trtype": "TCP", 00:20:39.349 "adrfam": "IPv4", 00:20:39.349 "traddr": "10.0.0.1", 00:20:39.349 "trsvcid": "60056" 00:20:39.349 }, 00:20:39.349 "auth": { 00:20:39.349 "state": "completed", 00:20:39.349 "digest": "sha512", 00:20:39.349 "dhgroup": "ffdhe3072" 00:20:39.349 } 00:20:39.349 } 
00:20:39.349 ]' 00:20:39.349 09:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.349 09:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.349 09:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.349 09:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:39.349 09:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.349 09:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.349 09:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.349 09:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.610 09:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjYyOGZiNTAyZTBhZDYxMzEwOGI4Yzg5ZmNkMTYzMDfXT6l9: --dhchap-ctrl-secret DHHC-1:02:NDAzNTFiOWUyOGFhNjhlNGI5NjMxZGFjZjc3OTczMjAyN2Q0YjdmMWYyMWRiOTQ0QIEdSw==: 00:20:40.180 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.180 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:40.180 09:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.180 09:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.180 09:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.180 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.180 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:40.180 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:40.441 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:40.441 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.441 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:40.441 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:40.441 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:40.441 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.441 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.441 09:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.441 09:29:27 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.441 09:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.441 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.441 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.701 00:20:40.701 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.701 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.701 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.701 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.701 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.701 09:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.701 09:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.701 09:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.701 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.701 { 00:20:40.701 "cntlid": 117, 00:20:40.701 "qid": 0, 00:20:40.701 "state": "enabled", 00:20:40.701 "thread": "nvmf_tgt_poll_group_000", 00:20:40.701 "listen_address": { 00:20:40.701 "trtype": "TCP", 00:20:40.701 "adrfam": "IPv4", 00:20:40.701 "traddr": "10.0.0.2", 00:20:40.701 "trsvcid": "4420" 00:20:40.701 }, 00:20:40.701 "peer_address": { 00:20:40.701 "trtype": "TCP", 00:20:40.701 "adrfam": "IPv4", 00:20:40.701 "traddr": "10.0.0.1", 00:20:40.701 "trsvcid": "60080" 00:20:40.701 }, 00:20:40.701 "auth": { 00:20:40.701 "state": "completed", 00:20:40.701 "digest": "sha512", 00:20:40.701 "dhgroup": "ffdhe3072" 00:20:40.701 } 00:20:40.701 } 00:20:40.701 ]' 00:20:40.701 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.962 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:40.962 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.962 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:40.962 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.962 09:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.962 09:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.962 09:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.222 09:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZDJlYjgxZGUzMmRlNjFiM2MyMWZmNzlkYWIzNTBmYWU3NmNkNmEwMjU5MGZjOTAyb6nGcQ==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5NGU3OGIyYzBhMWYzY2FhMTg3MGVhZmFlNjhjMGINfRK5: 00:20:41.793 09:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.793 09:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:41.793 09:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.793 09:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.793 09:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.793 09:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.793 09:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:41.793 09:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:42.053 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:42.053 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.054 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:42.054 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:42.054 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:42.054 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.054 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:42.054 09:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.054 09:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.054 09:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.054 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.054 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.314 00:20:42.314 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.314 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.314 09:29:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.314 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.314 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.314 09:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.314 09:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.314 09:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.314 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.314 { 00:20:42.314 "cntlid": 119, 00:20:42.314 "qid": 0, 00:20:42.314 "state": "enabled", 00:20:42.314 "thread": "nvmf_tgt_poll_group_000", 00:20:42.314 "listen_address": { 00:20:42.314 "trtype": "TCP", 00:20:42.314 "adrfam": "IPv4", 00:20:42.314 "traddr": "10.0.0.2", 00:20:42.314 "trsvcid": "4420" 00:20:42.314 }, 00:20:42.314 "peer_address": { 00:20:42.314 "trtype": "TCP", 00:20:42.314 "adrfam": "IPv4", 00:20:42.314 "traddr": "10.0.0.1", 00:20:42.314 "trsvcid": "60112" 00:20:42.314 }, 00:20:42.314 "auth": { 00:20:42.314 "state": "completed", 00:20:42.314 "digest": "sha512", 00:20:42.314 "dhgroup": "ffdhe3072" 00:20:42.314 } 00:20:42.314 } 00:20:42.314 ]' 00:20:42.314 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.314 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:42.314 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.574 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:42.574 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.574 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.574 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.574 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.574 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:MDc1YTFkZmUzMjEzODgxNjFlNDg5OGNkOGJiYzQ5ZTUyOTQ3NTc0MzdiMjNjMDgzMzhjYTFkMjRmNzBhY2NhYq3ofEE=: 00:20:43.514 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.514 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:43.514 09:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.514 09:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.514 09:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.514 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.514 09:29:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.514 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:43.514 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:43.514 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:20:43.514 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.514 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:43.514 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:43.514 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:43.514 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.514 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.514 09:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.514 09:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.514 09:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.514 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.514 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.774 00:20:43.774 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.774 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.774 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.034 09:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.034 09:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.034 09:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.034 09:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.034 09:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.034 09:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.034 { 00:20:44.034 "cntlid": 121, 00:20:44.034 "qid": 0, 00:20:44.034 "state": "enabled", 00:20:44.034 "thread": "nvmf_tgt_poll_group_000", 00:20:44.034 "listen_address": { 00:20:44.034 "trtype": "TCP", 00:20:44.034 "adrfam": "IPv4", 
00:20:44.034 "traddr": "10.0.0.2", 00:20:44.034 "trsvcid": "4420" 00:20:44.034 }, 00:20:44.034 "peer_address": { 00:20:44.034 "trtype": "TCP", 00:20:44.034 "adrfam": "IPv4", 00:20:44.034 "traddr": "10.0.0.1", 00:20:44.034 "trsvcid": "60140" 00:20:44.034 }, 00:20:44.034 "auth": { 00:20:44.034 "state": "completed", 00:20:44.034 "digest": "sha512", 00:20:44.034 "dhgroup": "ffdhe4096" 00:20:44.034 } 00:20:44.034 } 00:20:44.034 ]' 00:20:44.034 09:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.034 09:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:44.034 09:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.034 09:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:44.034 09:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.035 09:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.035 09:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.035 09:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.295 09:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTdlOGRjMTUxZWQ3OTg2MjRjZTQwNjg5ZmZmZGY1ZDJiYjVlMGVhN2M5Y2JjNzM2wwp8yw==: --dhchap-ctrl-secret DHHC-1:03:YTFmZWQxOTU0MzEwZDY0OGE1ZTQzOGQ0M2Q2NmY0MmJhZWZmNDU2NDA5MzBkODAzZTgzOWZjMjM1ZDI3NWI4ZksrVuA=: 00:20:44.866 09:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.866 09:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:44.866 09:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.866 09:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.866 09:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.866 09:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.866 09:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:44.867 09:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:45.127 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:20:45.127 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.127 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:45.127 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:45.127 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:45.127 09:29:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.127 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.127 09:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.127 09:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.127 09:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.127 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.127 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.387 00:20:45.387 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.387 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.387 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.387 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.387 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.387 09:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.387 09:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.387 09:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.387 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.387 { 00:20:45.387 "cntlid": 123, 00:20:45.387 "qid": 0, 00:20:45.387 "state": "enabled", 00:20:45.387 "thread": "nvmf_tgt_poll_group_000", 00:20:45.387 "listen_address": { 00:20:45.387 "trtype": "TCP", 00:20:45.387 "adrfam": "IPv4", 00:20:45.387 "traddr": "10.0.0.2", 00:20:45.387 "trsvcid": "4420" 00:20:45.387 }, 00:20:45.387 "peer_address": { 00:20:45.387 "trtype": "TCP", 00:20:45.387 "adrfam": "IPv4", 00:20:45.387 "traddr": "10.0.0.1", 00:20:45.387 "trsvcid": "60162" 00:20:45.387 }, 00:20:45.387 "auth": { 00:20:45.387 "state": "completed", 00:20:45.387 "digest": "sha512", 00:20:45.387 "dhgroup": "ffdhe4096" 00:20:45.387 } 00:20:45.387 } 00:20:45.387 ]' 00:20:45.387 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.647 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:45.647 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.647 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:45.647 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.647 09:29:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.647 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.648 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.973 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjYyOGZiNTAyZTBhZDYxMzEwOGI4Yzg5ZmNkMTYzMDfXT6l9: --dhchap-ctrl-secret DHHC-1:02:NDAzNTFiOWUyOGFhNjhlNGI5NjMxZGFjZjc3OTczMjAyN2Q0YjdmMWYyMWRiOTQ0QIEdSw==: 00:20:46.251 09:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.251 09:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:46.251 09:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.251 09:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.509 09:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.509 09:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.509 09:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:46.509 09:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:46.509 09:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:20:46.509 09:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.509 09:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:46.509 09:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:46.509 09:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:46.509 09:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.509 09:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.509 09:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.509 09:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.509 09:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.509 09:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.509 09:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.769 00:20:46.769 09:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.769 09:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.769 09:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.029 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.029 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.029 09:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.029 09:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.029 09:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.029 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.029 { 00:20:47.029 "cntlid": 125, 00:20:47.029 "qid": 0, 00:20:47.029 "state": "enabled", 00:20:47.029 "thread": "nvmf_tgt_poll_group_000", 00:20:47.029 "listen_address": { 00:20:47.029 "trtype": "TCP", 00:20:47.029 "adrfam": "IPv4", 00:20:47.029 "traddr": "10.0.0.2", 00:20:47.029 "trsvcid": "4420" 00:20:47.029 }, 00:20:47.029 "peer_address": { 00:20:47.029 "trtype": "TCP", 00:20:47.029 "adrfam": "IPv4", 00:20:47.029 "traddr": "10.0.0.1", 00:20:47.029 "trsvcid": "35746" 00:20:47.029 }, 00:20:47.029 "auth": { 00:20:47.029 "state": "completed", 00:20:47.029 "digest": "sha512", 00:20:47.029 "dhgroup": "ffdhe4096" 00:20:47.029 } 00:20:47.029 } 00:20:47.029 ]' 00:20:47.029 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.029 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:47.029 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.029 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:47.029 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.029 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.029 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.029 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.289 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZDJlYjgxZGUzMmRlNjFiM2MyMWZmNzlkYWIzNTBmYWU3NmNkNmEwMjU5MGZjOTAyb6nGcQ==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5NGU3OGIyYzBhMWYzY2FhMTg3MGVhZmFlNjhjMGINfRK5: 00:20:47.860 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:20:47.860 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:47.860 09:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.860 09:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.860 09:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.860 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.860 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:47.860 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:48.120 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:20:48.120 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.120 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:48.120 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:48.120 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:48.120 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.120 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:48.120 09:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.120 09:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.120 09:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.120 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:48.120 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:48.380 00:20:48.380 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.380 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.380 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.641 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.641 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.641 09:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.641 09:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:20:48.641 09:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.641 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.641 { 00:20:48.641 "cntlid": 127, 00:20:48.641 "qid": 0, 00:20:48.641 "state": "enabled", 00:20:48.641 "thread": "nvmf_tgt_poll_group_000", 00:20:48.641 "listen_address": { 00:20:48.641 "trtype": "TCP", 00:20:48.641 "adrfam": "IPv4", 00:20:48.641 "traddr": "10.0.0.2", 00:20:48.641 "trsvcid": "4420" 00:20:48.641 }, 00:20:48.641 "peer_address": { 00:20:48.641 "trtype": "TCP", 00:20:48.641 "adrfam": "IPv4", 00:20:48.641 "traddr": "10.0.0.1", 00:20:48.641 "trsvcid": "35778" 00:20:48.641 }, 00:20:48.641 "auth": { 00:20:48.641 "state": "completed", 00:20:48.641 "digest": "sha512", 00:20:48.641 "dhgroup": "ffdhe4096" 00:20:48.641 } 00:20:48.641 } 00:20:48.641 ]' 00:20:48.641 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.641 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.641 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.641 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:48.641 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.641 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.641 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.641 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.902 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:MDc1YTFkZmUzMjEzODgxNjFlNDg5OGNkOGJiYzQ5ZTUyOTQ3NTc0MzdiMjNjMDgzMzhjYTFkMjRmNzBhY2NhYq3ofEE=: 00:20:49.472 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.472 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:49.472 09:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.472 09:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.733 09:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.733 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.733 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.733 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:49.733 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:49.733 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:20:49.733 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.733 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:49.733 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:49.733 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:49.733 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.733 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.733 09:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.733 09:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.733 09:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.733 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.733 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.993 00:20:49.993 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.993 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.993 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.254 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.254 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.254 09:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.254 09:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.254 09:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.254 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.254 { 00:20:50.254 "cntlid": 129, 00:20:50.254 "qid": 0, 00:20:50.254 "state": "enabled", 00:20:50.254 "thread": "nvmf_tgt_poll_group_000", 00:20:50.254 "listen_address": { 00:20:50.254 "trtype": "TCP", 00:20:50.254 "adrfam": "IPv4", 00:20:50.254 "traddr": "10.0.0.2", 00:20:50.254 "trsvcid": "4420" 00:20:50.254 }, 00:20:50.254 "peer_address": { 00:20:50.254 "trtype": "TCP", 00:20:50.254 "adrfam": "IPv4", 00:20:50.254 "traddr": "10.0.0.1", 00:20:50.254 "trsvcid": "35806" 00:20:50.254 }, 00:20:50.254 "auth": { 00:20:50.254 "state": "completed", 00:20:50.254 "digest": "sha512", 00:20:50.254 "dhgroup": "ffdhe6144" 00:20:50.254 } 00:20:50.254 } 00:20:50.254 ]' 00:20:50.254 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.254 09:29:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:50.254 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.514 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:50.514 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.514 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.514 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.514 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.514 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTdlOGRjMTUxZWQ3OTg2MjRjZTQwNjg5ZmZmZGY1ZDJiYjVlMGVhN2M5Y2JjNzM2wwp8yw==: --dhchap-ctrl-secret DHHC-1:03:YTFmZWQxOTU0MzEwZDY0OGE1ZTQzOGQ0M2Q2NmY0MmJhZWZmNDU2NDA5MzBkODAzZTgzOWZjMjM1ZDI3NWI4ZksrVuA=: 00:20:51.084 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.084 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:51.084 09:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.084 09:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.084 09:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.084 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.084 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:51.084 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:51.345 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:20:51.345 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.345 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:51.345 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:51.345 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:51.345 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.345 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.345 09:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.345 09:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.345 09:29:38 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.345 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.345 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.605 00:20:51.605 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.605 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.605 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.864 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.864 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.864 09:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.864 09:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.864 09:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.864 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.864 { 00:20:51.864 "cntlid": 131, 00:20:51.864 "qid": 0, 00:20:51.864 "state": "enabled", 00:20:51.864 "thread": "nvmf_tgt_poll_group_000", 00:20:51.864 "listen_address": { 00:20:51.864 "trtype": "TCP", 00:20:51.864 "adrfam": "IPv4", 00:20:51.864 "traddr": "10.0.0.2", 00:20:51.864 "trsvcid": "4420" 00:20:51.864 }, 00:20:51.864 "peer_address": { 00:20:51.864 "trtype": "TCP", 00:20:51.864 "adrfam": "IPv4", 00:20:51.864 "traddr": "10.0.0.1", 00:20:51.864 "trsvcid": "35830" 00:20:51.864 }, 00:20:51.864 "auth": { 00:20:51.864 "state": "completed", 00:20:51.864 "digest": "sha512", 00:20:51.864 "dhgroup": "ffdhe6144" 00:20:51.864 } 00:20:51.864 } 00:20:51.864 ]' 00:20:51.864 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.864 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.864 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.864 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:51.864 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.864 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.864 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.864 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.124 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjYyOGZiNTAyZTBhZDYxMzEwOGI4Yzg5ZmNkMTYzMDfXT6l9: --dhchap-ctrl-secret DHHC-1:02:NDAzNTFiOWUyOGFhNjhlNGI5NjMxZGFjZjc3OTczMjAyN2Q0YjdmMWYyMWRiOTQ0QIEdSw==: 00:20:53.064 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.064 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:53.064 09:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.064 09:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.064 09:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.064 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.064 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:53.064 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:53.064 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:20:53.064 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.064 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:53.064 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:53.064 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:53.065 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.065 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.065 09:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.065 09:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.065 09:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.065 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.065 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.324 00:20:53.324 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.324 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.324 09:29:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.584 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.584 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.584 09:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.584 09:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.584 09:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.584 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.584 { 00:20:53.584 "cntlid": 133, 00:20:53.584 "qid": 0, 00:20:53.584 "state": "enabled", 00:20:53.584 "thread": "nvmf_tgt_poll_group_000", 00:20:53.584 "listen_address": { 00:20:53.584 "trtype": "TCP", 00:20:53.584 "adrfam": "IPv4", 00:20:53.584 "traddr": "10.0.0.2", 00:20:53.584 "trsvcid": "4420" 00:20:53.584 }, 00:20:53.584 "peer_address": { 00:20:53.584 "trtype": "TCP", 00:20:53.584 "adrfam": "IPv4", 00:20:53.584 "traddr": "10.0.0.1", 00:20:53.584 "trsvcid": "35866" 00:20:53.584 }, 00:20:53.584 "auth": { 00:20:53.584 "state": "completed", 00:20:53.584 "digest": "sha512", 00:20:53.584 "dhgroup": "ffdhe6144" 00:20:53.584 } 00:20:53.584 } 00:20:53.584 ]' 00:20:53.584 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.584 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.584 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.584 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:53.584 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.584 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.584 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.584 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.844 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZDJlYjgxZGUzMmRlNjFiM2MyMWZmNzlkYWIzNTBmYWU3NmNkNmEwMjU5MGZjOTAyb6nGcQ==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5NGU3OGIyYzBhMWYzY2FhMTg3MGVhZmFlNjhjMGINfRK5: 00:20:54.784 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.784 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:54.784 09:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.784 09:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.784 09:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.784 09:29:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:54.784 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:54.784 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:54.784 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:20:54.784 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:54.784 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:54.784 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:54.784 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:54.784 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.784 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:54.784 09:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.784 09:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.784 09:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.784 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:54.784 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:55.045 00:20:55.045 09:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.045 09:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.045 09:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.304 09:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.304 09:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.304 09:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.304 09:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.304 09:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.304 09:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.304 { 00:20:55.304 "cntlid": 135, 00:20:55.304 "qid": 0, 00:20:55.304 "state": "enabled", 00:20:55.304 "thread": "nvmf_tgt_poll_group_000", 00:20:55.304 "listen_address": { 00:20:55.304 "trtype": "TCP", 00:20:55.304 "adrfam": "IPv4", 00:20:55.304 "traddr": "10.0.0.2", 00:20:55.304 "trsvcid": "4420" 00:20:55.304 }, 
00:20:55.304 "peer_address": { 00:20:55.304 "trtype": "TCP", 00:20:55.304 "adrfam": "IPv4", 00:20:55.304 "traddr": "10.0.0.1", 00:20:55.304 "trsvcid": "35890" 00:20:55.304 }, 00:20:55.304 "auth": { 00:20:55.304 "state": "completed", 00:20:55.304 "digest": "sha512", 00:20:55.305 "dhgroup": "ffdhe6144" 00:20:55.305 } 00:20:55.305 } 00:20:55.305 ]' 00:20:55.305 09:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.305 09:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.305 09:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.305 09:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:55.305 09:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.305 09:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.305 09:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.305 09:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.564 09:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:MDc1YTFkZmUzMjEzODgxNjFlNDg5OGNkOGJiYzQ5ZTUyOTQ3NTc0MzdiMjNjMDgzMzhjYTFkMjRmNzBhY2NhYq3ofEE=: 00:20:56.139 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.399 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:56.399 09:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.399 09:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.399 09:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.399 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:56.399 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.399 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:56.399 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:56.399 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:20:56.399 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.399 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:56.399 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:56.399 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:56.399 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:20:56.399 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.399 09:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.399 09:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.399 09:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.399 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.399 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.971 00:20:56.971 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.971 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.971 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.232 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.232 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.232 09:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.232 09:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.232 09:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.232 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.232 { 00:20:57.232 "cntlid": 137, 00:20:57.232 "qid": 0, 00:20:57.232 "state": "enabled", 00:20:57.232 "thread": "nvmf_tgt_poll_group_000", 00:20:57.232 "listen_address": { 00:20:57.232 "trtype": "TCP", 00:20:57.232 "adrfam": "IPv4", 00:20:57.232 "traddr": "10.0.0.2", 00:20:57.232 "trsvcid": "4420" 00:20:57.232 }, 00:20:57.232 "peer_address": { 00:20:57.232 "trtype": "TCP", 00:20:57.232 "adrfam": "IPv4", 00:20:57.232 "traddr": "10.0.0.1", 00:20:57.232 "trsvcid": "47408" 00:20:57.232 }, 00:20:57.232 "auth": { 00:20:57.232 "state": "completed", 00:20:57.232 "digest": "sha512", 00:20:57.232 "dhgroup": "ffdhe8192" 00:20:57.232 } 00:20:57.232 } 00:20:57.232 ]' 00:20:57.232 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.232 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.232 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.232 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:57.232 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.232 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.232 09:29:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.232 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.493 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTdlOGRjMTUxZWQ3OTg2MjRjZTQwNjg5ZmZmZGY1ZDJiYjVlMGVhN2M5Y2JjNzM2wwp8yw==: --dhchap-ctrl-secret DHHC-1:03:YTFmZWQxOTU0MzEwZDY0OGE1ZTQzOGQ0M2Q2NmY0MmJhZWZmNDU2NDA5MzBkODAzZTgzOWZjMjM1ZDI3NWI4ZksrVuA=: 00:20:58.062 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.062 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:58.062 09:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.062 09:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.062 09:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.062 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.062 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:58.062 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:58.323 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:20:58.323 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.323 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:58.323 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:58.323 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:58.323 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.323 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.323 09:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.323 09:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.323 09:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.323 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.323 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.894 00:20:58.894 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.894 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.894 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.894 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.894 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.894 09:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.894 09:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.894 09:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.894 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.894 { 00:20:58.894 "cntlid": 139, 00:20:58.894 "qid": 0, 00:20:58.894 "state": "enabled", 00:20:58.894 "thread": "nvmf_tgt_poll_group_000", 00:20:58.894 "listen_address": { 00:20:58.894 "trtype": "TCP", 00:20:58.894 "adrfam": "IPv4", 00:20:58.894 "traddr": "10.0.0.2", 00:20:58.894 "trsvcid": "4420" 00:20:58.894 }, 00:20:58.894 "peer_address": { 00:20:58.894 "trtype": "TCP", 00:20:58.894 "adrfam": "IPv4", 00:20:58.894 "traddr": "10.0.0.1", 00:20:58.895 "trsvcid": "47448" 00:20:58.895 }, 00:20:58.895 "auth": { 00:20:58.895 "state": "completed", 00:20:58.895 "digest": "sha512", 00:20:58.895 "dhgroup": "ffdhe8192" 00:20:58.895 } 00:20:58.895 } 00:20:58.895 ]' 00:20:58.895 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:59.155 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.155 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:59.155 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:59.155 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:59.155 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.155 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.155 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.415 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjYyOGZiNTAyZTBhZDYxMzEwOGI4Yzg5ZmNkMTYzMDfXT6l9: --dhchap-ctrl-secret DHHC-1:02:NDAzNTFiOWUyOGFhNjhlNGI5NjMxZGFjZjc3OTczMjAyN2Q0YjdmMWYyMWRiOTQ0QIEdSw==: 00:20:59.985 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.985 09:29:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:59.985 09:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.985 09:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.985 09:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.985 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.985 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:59.985 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:00.245 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:00.245 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.245 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:00.245 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:00.245 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:00.245 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.245 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.245 09:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.245 09:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.245 09:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.245 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.245 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.817 00:21:00.817 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.817 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.817 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.817 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.817 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.817 09:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.817 09:29:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:00.817 09:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.817 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.817 { 00:21:00.817 "cntlid": 141, 00:21:00.817 "qid": 0, 00:21:00.817 "state": "enabled", 00:21:00.817 "thread": "nvmf_tgt_poll_group_000", 00:21:00.817 "listen_address": { 00:21:00.817 "trtype": "TCP", 00:21:00.817 "adrfam": "IPv4", 00:21:00.817 "traddr": "10.0.0.2", 00:21:00.817 "trsvcid": "4420" 00:21:00.817 }, 00:21:00.817 "peer_address": { 00:21:00.817 "trtype": "TCP", 00:21:00.817 "adrfam": "IPv4", 00:21:00.817 "traddr": "10.0.0.1", 00:21:00.817 "trsvcid": "47480" 00:21:00.817 }, 00:21:00.817 "auth": { 00:21:00.817 "state": "completed", 00:21:00.817 "digest": "sha512", 00:21:00.817 "dhgroup": "ffdhe8192" 00:21:00.817 } 00:21:00.817 } 00:21:00.817 ]' 00:21:00.817 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.817 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.817 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.817 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:01.078 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.078 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.078 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.078 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.078 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZDJlYjgxZGUzMmRlNjFiM2MyMWZmNzlkYWIzNTBmYWU3NmNkNmEwMjU5MGZjOTAyb6nGcQ==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5NGU3OGIyYzBhMWYzY2FhMTg3MGVhZmFlNjhjMGINfRK5: 00:21:02.019 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.019 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:02.019 09:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.019 09:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.019 09:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.019 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.019 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:02.019 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:02.019 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:21:02.019 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.019 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:02.019 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:02.019 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:02.019 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.019 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:21:02.019 09:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.019 09:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.020 09:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.020 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:02.020 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:02.591 00:21:02.591 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.591 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.592 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.592 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.852 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.852 09:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.852 09:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.852 09:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.852 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.852 { 00:21:02.852 "cntlid": 143, 00:21:02.852 "qid": 0, 00:21:02.852 "state": "enabled", 00:21:02.852 "thread": "nvmf_tgt_poll_group_000", 00:21:02.852 "listen_address": { 00:21:02.852 "trtype": "TCP", 00:21:02.852 "adrfam": "IPv4", 00:21:02.852 "traddr": "10.0.0.2", 00:21:02.852 "trsvcid": "4420" 00:21:02.852 }, 00:21:02.852 "peer_address": { 00:21:02.852 "trtype": "TCP", 00:21:02.852 "adrfam": "IPv4", 00:21:02.852 "traddr": "10.0.0.1", 00:21:02.852 "trsvcid": "47500" 00:21:02.852 }, 00:21:02.852 "auth": { 00:21:02.852 "state": "completed", 00:21:02.852 "digest": "sha512", 00:21:02.852 "dhgroup": "ffdhe8192" 00:21:02.852 } 00:21:02.852 } 00:21:02.852 ]' 00:21:02.852 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.852 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.852 
09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.852 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:02.852 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.852 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.852 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.852 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.115 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:MDc1YTFkZmUzMjEzODgxNjFlNDg5OGNkOGJiYzQ5ZTUyOTQ3NTc0MzdiMjNjMDgzMzhjYTFkMjRmNzBhY2NhYq3ofEE=: 00:21:03.686 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.686 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:03.686 09:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.686 09:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.686 09:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.686 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:03.686 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:03.686 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:03.686 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:03.686 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:03.686 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:03.946 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:03.946 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.946 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:03.946 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:03.946 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:03.946 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.946 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:03.946 09:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.946 09:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.946 09:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.946 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.946 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.516 00:21:04.516 09:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.516 09:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.516 09:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.516 09:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.516 09:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.516 09:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.516 09:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.516 09:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.516 09:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.516 { 00:21:04.516 "cntlid": 145, 00:21:04.516 "qid": 0, 00:21:04.516 "state": "enabled", 00:21:04.516 "thread": "nvmf_tgt_poll_group_000", 00:21:04.516 "listen_address": { 00:21:04.516 "trtype": "TCP", 00:21:04.516 "adrfam": "IPv4", 00:21:04.516 "traddr": "10.0.0.2", 00:21:04.516 "trsvcid": "4420" 00:21:04.516 }, 00:21:04.516 "peer_address": { 00:21:04.516 "trtype": "TCP", 00:21:04.516 "adrfam": "IPv4", 00:21:04.516 "traddr": "10.0.0.1", 00:21:04.516 "trsvcid": "47520" 00:21:04.516 }, 00:21:04.516 "auth": { 00:21:04.516 "state": "completed", 00:21:04.516 "digest": "sha512", 00:21:04.516 "dhgroup": "ffdhe8192" 00:21:04.516 } 00:21:04.516 } 00:21:04.516 ]' 00:21:04.516 09:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.777 09:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.777 09:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:04.777 09:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:04.777 09:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.777 09:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.777 09:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.777 09:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.037 09:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTdlOGRjMTUxZWQ3OTg2MjRjZTQwNjg5ZmZmZGY1ZDJiYjVlMGVhN2M5Y2JjNzM2wwp8yw==: --dhchap-ctrl-secret DHHC-1:03:YTFmZWQxOTU0MzEwZDY0OGE1ZTQzOGQ0M2Q2NmY0MmJhZWZmNDU2NDA5MzBkODAzZTgzOWZjMjM1ZDI3NWI4ZksrVuA=: 00:21:05.631 09:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.631 09:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:05.631 09:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.631 09:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.631 09:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.631 09:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 00:21:05.631 09:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.631 09:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.631 09:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.631 09:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:05.631 09:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:05.631 09:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:05.631 09:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:05.631 09:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:05.631 09:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:05.631 09:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:05.631 09:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:05.631 09:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:21:06.211 request: 00:21:06.211 { 00:21:06.211 "name": "nvme0", 00:21:06.211 "trtype": "tcp", 00:21:06.211 "traddr": "10.0.0.2", 00:21:06.211 "adrfam": "ipv4", 00:21:06.211 "trsvcid": "4420", 00:21:06.211 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:06.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:06.211 "prchk_reftag": false, 00:21:06.212 "prchk_guard": false, 00:21:06.212 "hdgst": false, 00:21:06.212 "ddgst": false, 00:21:06.212 "dhchap_key": "key2", 00:21:06.212 "method": "bdev_nvme_attach_controller", 00:21:06.212 "req_id": 1 00:21:06.212 } 00:21:06.212 Got JSON-RPC error response 00:21:06.212 response: 00:21:06.212 { 00:21:06.212 "code": -5, 00:21:06.212 "message": "Input/output error" 00:21:06.212 } 00:21:06.212 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:06.212 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:06.212 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:06.212 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:06.212 09:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:06.212 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.212 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.212 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.212 09:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.212 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.212 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.212 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.212 09:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:06.212 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:06.212 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:06.212 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:06.212 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:06.212 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:06.212 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:06.212 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:06.212 09:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:06.782 request: 00:21:06.782 { 00:21:06.782 "name": "nvme0", 00:21:06.782 "trtype": "tcp", 00:21:06.782 "traddr": "10.0.0.2", 00:21:06.782 "adrfam": "ipv4", 00:21:06.782 "trsvcid": "4420", 00:21:06.782 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:06.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:06.782 "prchk_reftag": false, 00:21:06.782 "prchk_guard": false, 00:21:06.782 "hdgst": false, 00:21:06.782 "ddgst": false, 00:21:06.782 "dhchap_key": "key1", 00:21:06.782 "dhchap_ctrlr_key": "ckey2", 00:21:06.782 "method": "bdev_nvme_attach_controller", 00:21:06.782 "req_id": 1 00:21:06.782 } 00:21:06.782 Got JSON-RPC error response 00:21:06.782 response: 00:21:06.782 { 00:21:06.782 "code": -5, 00:21:06.782 "message": "Input/output error" 00:21:06.782 } 00:21:06.782 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:06.782 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:06.782 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:06.782 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:06.782 09:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:06.782 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.782 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.782 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.782 09:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 00:21:06.782 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.782 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.782 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.782 09:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.782 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:06.782 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.782 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:21:06.782 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:06.782 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:06.782 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:06.782 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.782 09:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.042 request: 00:21:07.042 { 00:21:07.042 "name": "nvme0", 00:21:07.042 "trtype": "tcp", 00:21:07.042 "traddr": "10.0.0.2", 00:21:07.042 "adrfam": "ipv4", 00:21:07.042 "trsvcid": "4420", 00:21:07.042 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:07.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:07.042 "prchk_reftag": false, 00:21:07.042 "prchk_guard": false, 00:21:07.042 "hdgst": false, 00:21:07.042 "ddgst": false, 00:21:07.042 "dhchap_key": "key1", 00:21:07.042 "dhchap_ctrlr_key": "ckey1", 00:21:07.042 "method": "bdev_nvme_attach_controller", 00:21:07.042 "req_id": 1 00:21:07.042 } 00:21:07.042 Got JSON-RPC error response 00:21:07.042 response: 00:21:07.042 { 00:21:07.042 "code": -5, 00:21:07.042 "message": "Input/output error" 00:21:07.042 } 00:21:07.042 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:07.042 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:07.042 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:07.042 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:07.042 09:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:07.042 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.042 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.042 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.043 09:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 696444 00:21:07.043 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 696444 ']' 00:21:07.043 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 696444 00:21:07.043 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:07.043 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:07.043 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 696444 00:21:07.303 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:07.303 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 
= sudo ']' 00:21:07.303 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 696444' 00:21:07.303 killing process with pid 696444 00:21:07.303 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 696444 00:21:07.303 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 696444 00:21:07.303 09:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:07.303 09:29:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:07.303 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:07.303 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.303 09:29:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=721234 00:21:07.303 09:29:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 721234 00:21:07.303 09:29:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:07.303 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 721234 ']' 00:21:07.303 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.303 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:07.303 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.303 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:07.303 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.267 09:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:08.267 09:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:08.267 09:29:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:08.267 09:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:08.267 09:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.267 09:29:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.267 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:08.267 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 721234 00:21:08.267 09:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 721234 ']' 00:21:08.267 09:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.267 09:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:08.267 09:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
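Here the original target (pid 696444) has been killed and nvmfappstart brings up a fresh nvmf_tgt with the nvmf_auth log flag enabled so the remaining negative cases can be traced. A sketch of what that restart amounts to, using the namespace, binary path, and flags shown in the log; the polling loop and the framework_start_init call below are stand-ins for what waitforlisten and the subsequent rpc_cmd block do, not lines taken from the script:

  # start the target inside the test network namespace with DH-HMAC-CHAP tracing enabled
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!

  # wait until the app answers on its default RPC socket, then finish initialization
  until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  rpc.py -s /var/tmp/spdk.sock framework_start_init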
00:21:08.267 09:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:08.267 09:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.267 09:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:08.267 09:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:08.267 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:21:08.267 09:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.267 09:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.528 09:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.528 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:21:08.528 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.529 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:08.529 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:08.529 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:08.529 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.529 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:21:08.529 09:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.529 09:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.529 09:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.529 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:08.529 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:09.101 00:21:09.101 09:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.101 09:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.101 09:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.101 09:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.101 09:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.101 09:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.101 09:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.101 09:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.101 09:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.101 { 00:21:09.101 
"cntlid": 1, 00:21:09.101 "qid": 0, 00:21:09.101 "state": "enabled", 00:21:09.101 "thread": "nvmf_tgt_poll_group_000", 00:21:09.101 "listen_address": { 00:21:09.101 "trtype": "TCP", 00:21:09.101 "adrfam": "IPv4", 00:21:09.101 "traddr": "10.0.0.2", 00:21:09.101 "trsvcid": "4420" 00:21:09.101 }, 00:21:09.101 "peer_address": { 00:21:09.101 "trtype": "TCP", 00:21:09.101 "adrfam": "IPv4", 00:21:09.101 "traddr": "10.0.0.1", 00:21:09.101 "trsvcid": "52134" 00:21:09.101 }, 00:21:09.101 "auth": { 00:21:09.101 "state": "completed", 00:21:09.101 "digest": "sha512", 00:21:09.101 "dhgroup": "ffdhe8192" 00:21:09.101 } 00:21:09.101 } 00:21:09.101 ]' 00:21:09.101 09:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.101 09:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.101 09:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.362 09:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:09.362 09:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.362 09:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.362 09:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.362 09:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.362 09:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:MDc1YTFkZmUzMjEzODgxNjFlNDg5OGNkOGJiYzQ5ZTUyOTQ3NTc0MzdiMjNjMDgzMzhjYTFkMjRmNzBhY2NhYq3ofEE=: 00:21:10.304 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.304 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:10.304 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.304 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.304 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.304 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:21:10.304 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.304 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.304 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.304 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:10.304 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:10.305 09:29:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.305 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:10.305 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.305 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:10.305 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:10.305 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:10.305 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:10.305 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.305 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.566 request: 00:21:10.566 { 00:21:10.566 "name": "nvme0", 00:21:10.566 "trtype": "tcp", 00:21:10.566 "traddr": "10.0.0.2", 00:21:10.566 "adrfam": "ipv4", 00:21:10.566 "trsvcid": "4420", 00:21:10.566 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:10.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:10.566 "prchk_reftag": false, 00:21:10.566 "prchk_guard": false, 00:21:10.566 "hdgst": false, 00:21:10.566 "ddgst": false, 00:21:10.566 "dhchap_key": "key3", 00:21:10.566 "method": "bdev_nvme_attach_controller", 00:21:10.566 "req_id": 1 00:21:10.566 } 00:21:10.566 Got JSON-RPC error response 00:21:10.566 response: 00:21:10.566 { 00:21:10.566 "code": -5, 00:21:10.566 "message": "Input/output error" 00:21:10.566 } 00:21:10.566 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:10.566 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:10.566 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:10.566 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:10.566 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:21:10.566 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:21:10.566 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:10.566 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:10.566 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.566 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:10.566 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.566 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:10.566 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:10.566 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:10.566 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:10.566 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.566 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.827 request: 00:21:10.827 { 00:21:10.827 "name": "nvme0", 00:21:10.827 "trtype": "tcp", 00:21:10.827 "traddr": "10.0.0.2", 00:21:10.827 "adrfam": "ipv4", 00:21:10.827 "trsvcid": "4420", 00:21:10.827 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:10.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:10.827 "prchk_reftag": false, 00:21:10.827 "prchk_guard": false, 00:21:10.827 "hdgst": false, 00:21:10.827 "ddgst": false, 00:21:10.827 "dhchap_key": "key3", 00:21:10.827 "method": "bdev_nvme_attach_controller", 00:21:10.827 "req_id": 1 00:21:10.827 } 00:21:10.827 Got JSON-RPC error response 00:21:10.827 response: 00:21:10.827 { 00:21:10.828 "code": -5, 00:21:10.828 "message": "Input/output error" 00:21:10.828 } 00:21:10.828 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:10.828 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:10.828 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:10.828 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:10.828 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:10.828 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:10.828 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:10.828 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:10.828 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:10.828 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:10.828 09:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:10.828 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.828 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.828 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.828 09:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:10.828 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.828 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.089 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.089 09:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:11.089 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:11.089 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:11.089 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:11.089 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:11.089 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:11.089 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:11.089 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:11.089 09:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:11.089 request: 00:21:11.089 { 00:21:11.089 "name": "nvme0", 00:21:11.089 "trtype": "tcp", 00:21:11.089 "traddr": "10.0.0.2", 00:21:11.089 "adrfam": "ipv4", 00:21:11.089 "trsvcid": "4420", 00:21:11.089 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:11.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:11.089 "prchk_reftag": false, 00:21:11.089 "prchk_guard": false, 00:21:11.089 "hdgst": false, 00:21:11.089 "ddgst": false, 00:21:11.089 
"dhchap_key": "key0", 00:21:11.089 "dhchap_ctrlr_key": "key1", 00:21:11.089 "method": "bdev_nvme_attach_controller", 00:21:11.089 "req_id": 1 00:21:11.089 } 00:21:11.089 Got JSON-RPC error response 00:21:11.089 response: 00:21:11.089 { 00:21:11.089 "code": -5, 00:21:11.089 "message": "Input/output error" 00:21:11.089 } 00:21:11.089 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:11.089 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:11.089 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:11.089 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:11.089 09:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:11.089 09:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:11.350 00:21:11.350 09:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:11.350 09:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:21:11.350 09:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.610 09:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.610 09:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.610 09:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.610 09:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:11.610 09:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:11.610 09:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 696647 00:21:11.610 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 696647 ']' 00:21:11.610 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 696647 00:21:11.610 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:11.610 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:11.610 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 696647 00:21:11.610 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:11.610 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:11.610 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 696647' 00:21:11.610 killing process with pid 696647 00:21:11.610 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 696647 00:21:11.610 09:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 696647 00:21:11.871 
09:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:11.871 09:29:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:11.871 09:29:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:11.871 09:29:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:11.871 09:29:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:11.871 09:29:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:11.871 09:29:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:11.871 rmmod nvme_tcp 00:21:11.871 rmmod nvme_fabrics 00:21:11.871 rmmod nvme_keyring 00:21:11.871 09:29:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:11.871 09:29:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:11.871 09:29:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:11.871 09:29:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 721234 ']' 00:21:11.871 09:29:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 721234 00:21:11.871 09:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 721234 ']' 00:21:11.871 09:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 721234 00:21:11.871 09:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:11.871 09:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:11.871 09:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 721234 00:21:12.131 09:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:12.132 09:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:12.132 09:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 721234' 00:21:12.132 killing process with pid 721234 00:21:12.132 09:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 721234 00:21:12.132 09:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 721234 00:21:12.132 09:29:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:12.132 09:29:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:12.132 09:29:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:12.132 09:29:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:12.132 09:29:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:12.132 09:29:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.132 09:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:12.132 09:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.678 09:30:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:14.678 09:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.GSU /tmp/spdk.key-sha256.cbw /tmp/spdk.key-sha384.cFh /tmp/spdk.key-sha512.Nfp /tmp/spdk.key-sha512.kNg /tmp/spdk.key-sha384.Vty /tmp/spdk.key-sha256.4jS '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:14.678 00:21:14.678 real 2m18.283s 00:21:14.678 user 5m6.956s 00:21:14.678 sys 0m19.542s 00:21:14.678 09:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:14.678 09:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.678 ************************************ 00:21:14.678 END TEST nvmf_auth_target 00:21:14.678 ************************************ 00:21:14.678 09:30:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:14.678 09:30:01 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:21:14.678 09:30:01 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:14.678 09:30:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:14.678 09:30:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:14.678 09:30:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:14.678 ************************************ 00:21:14.678 START TEST nvmf_bdevio_no_huge 00:21:14.678 ************************************ 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:14.678 * Looking for test storage... 00:21:14.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
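That closes nvmf_auth_target (2m18s wall clock); run_test immediately launches the bdevio no-hugepages suite. Sourcing test/nvmf/common.sh at the top of bdevio.sh sets the usual fixtures: listener ports 4420-4422, a host NQN generated with nvme gen-hostnqn, and NET_TYPE=phy, while the NO_HUGE arguments visible a little further down are how the --no-hugepages flag reaches the target command line. A sketch of the stand-alone invocation the harness performs here (path and flags exactly as logged):

  # run the bdevio test over TCP with the target started without hugepages
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh \
      --transport=tcp --no-hugepages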
00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:14.678 09:30:01 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:14.678 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:14.679 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:14.679 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.679 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:14.679 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.679 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:14.679 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:14.679 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:14.679 09:30:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:22.842 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:22.842 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:22.842 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:22.842 
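NIC discovery in nvmf/common.sh is driven purely by PCI IDs: the e810 list is built from the Intel (0x8086) device IDs 0x1592 and 0x159b, each matching function is reported as found, and its netdev name is then read from sysfs. A sketch of the same lookup done by hand for the adapter found above; the interface names are the ones the next lines report, and the lspci call is an illustration rather than something the script runs:

  # list Intel E810 functions (vendor 0x8086, device 0x159b) present on the host
  lspci -d 8086:159b
  # -> the two E810 functions at 0000:31:00.0 / 0000:31:00.1, bound to the ice driver

  # the net device behind each function, i.e. what the pci_net_devs glob expands to
  ls /sys/bus/pci/devices/0000:31:00.0/net/   # cvl_0_0
  ls /sys/bus/pci/devices/0000:31:00.1/net/   # cvl_0_1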
09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:22.843 Found net devices under 0000:31:00.0: cvl_0_0 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:22.843 Found net devices under 0000:31:00.1: cvl_0_1 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:22.843 09:30:09 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:22.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:22.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:21:22.843 00:21:22.843 --- 10.0.0.2 ping statistics --- 00:21:22.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.843 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:22.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:22.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:21:22.843 00:21:22.843 --- 10.0.0.1 ping statistics --- 00:21:22.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.843 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=727293 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 
727293 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 727293 ']' 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.843 09:30:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:22.843 [2024-07-15 09:30:09.737354] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:21:22.843 [2024-07-15 09:30:09.737410] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:22.843 [2024-07-15 09:30:09.835319] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:22.843 [2024-07-15 09:30:09.941104] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.843 [2024-07-15 09:30:09.941156] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.843 [2024-07-15 09:30:09.941164] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:22.843 [2024-07-15 09:30:09.941171] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:22.843 [2024-07-15 09:30:09.941177] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
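At this point the no-hugepage target is up: the harness has moved one E810 port (cvl_0_0, 10.0.0.2/24) into the private namespace cvl_0_0_ns_spdk, kept its peer (cvl_0_1, 10.0.0.1/24) in the root namespace, opened TCP/4420 in the firewall, and launched nvmf_tgt inside the namespace with --no-huge -s 1024 so it runs from regular pages under a 1 GiB cap. A condensed sketch of those steps, with the workspace path abbreviated to $rootdir and the readiness wait simplified (both assumptions, not the literal harness code):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    # poll the RPC socket until the app answers (roughly what waitforlisten does)
    until "$rootdir/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done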
00:21:22.843 [2024-07-15 09:30:09.941344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:22.843 [2024-07-15 09:30:09.941504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:21:22.843 [2024-07-15 09:30:09.941663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:22.843 [2024-07-15 09:30:09.941664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:21:23.414 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.414 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:21:23.414 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:23.414 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:23.414 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.414 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.415 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:23.415 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.415 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.415 [2024-07-15 09:30:10.582760] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.415 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.415 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:23.415 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.415 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.415 Malloc0 00:21:23.415 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.415 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:23.415 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.415 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.674 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.674 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:23.674 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.674 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.674 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.674 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:23.674 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.674 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.674 [2024-07-15 09:30:10.636363] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.674 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.675 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:23.675 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:23.675 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:23.675 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:23.675 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.675 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.675 { 00:21:23.675 "params": { 00:21:23.675 "name": "Nvme$subsystem", 00:21:23.675 "trtype": "$TEST_TRANSPORT", 00:21:23.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.675 "adrfam": "ipv4", 00:21:23.675 "trsvcid": "$NVMF_PORT", 00:21:23.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.675 "hdgst": ${hdgst:-false}, 00:21:23.675 "ddgst": ${ddgst:-false} 00:21:23.675 }, 00:21:23.675 "method": "bdev_nvme_attach_controller" 00:21:23.675 } 00:21:23.675 EOF 00:21:23.675 )") 00:21:23.675 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:23.675 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:21:23.675 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:23.675 09:30:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:23.675 "params": { 00:21:23.675 "name": "Nvme1", 00:21:23.675 "trtype": "tcp", 00:21:23.675 "traddr": "10.0.0.2", 00:21:23.675 "adrfam": "ipv4", 00:21:23.675 "trsvcid": "4420", 00:21:23.675 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.675 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:23.675 "hdgst": false, 00:21:23.675 "ddgst": false 00:21:23.675 }, 00:21:23.675 "method": "bdev_nvme_attach_controller" 00:21:23.675 }' 00:21:23.675 [2024-07-15 09:30:10.692634] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
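The JSON blob printed above is the entire configuration bdevio receives: a single bdev_nvme_attach_controller call aimed at the listener created a moment earlier, with header/data digests left off. The /dev/fd/62 argument is consistent with the harness handing that blob to bdevio through bash process substitution (an assumption about the plumbing). The equivalent attach can also be issued by hand against any running SPDK application's RPC socket, e.g. (socket path assumed):

    scripts/rpc.py -s /var/tmp/spdk_app.sock bdev_nvme_attach_controller \
        -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1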
00:21:23.675 [2024-07-15 09:30:10.692706] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid727577 ] 00:21:23.675 [2024-07-15 09:30:10.770709] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:23.675 [2024-07-15 09:30:10.867608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.675 [2024-07-15 09:30:10.867724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.675 [2024-07-15 09:30:10.867728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.935 I/O targets: 00:21:23.935 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:23.935 00:21:23.935 00:21:23.935 CUnit - A unit testing framework for C - Version 2.1-3 00:21:23.935 http://cunit.sourceforge.net/ 00:21:23.935 00:21:23.935 00:21:23.935 Suite: bdevio tests on: Nvme1n1 00:21:23.935 Test: blockdev write read block ...passed 00:21:23.935 Test: blockdev write zeroes read block ...passed 00:21:23.935 Test: blockdev write zeroes read no split ...passed 00:21:23.935 Test: blockdev write zeroes read split ...passed 00:21:24.194 Test: blockdev write zeroes read split partial ...passed 00:21:24.194 Test: blockdev reset ...[2024-07-15 09:30:11.185119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.194 [2024-07-15 09:30:11.185185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d6970 (9): Bad file descriptor 00:21:24.194 [2024-07-15 09:30:11.255599] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:24.194 passed 00:21:24.194 Test: blockdev write read 8 blocks ...passed 00:21:24.194 Test: blockdev write read size > 128k ...passed 00:21:24.194 Test: blockdev write read invalid size ...passed 00:21:24.194 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:24.194 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:24.194 Test: blockdev write read max offset ...passed 00:21:24.455 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:24.455 Test: blockdev writev readv 8 blocks ...passed 00:21:24.455 Test: blockdev writev readv 30 x 1block ...passed 00:21:24.455 Test: blockdev writev readv block ...passed 00:21:24.455 Test: blockdev writev readv size > 128k ...passed 00:21:24.455 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:24.455 Test: blockdev comparev and writev ...[2024-07-15 09:30:11.518776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.455 [2024-07-15 09:30:11.518801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.455 [2024-07-15 09:30:11.518812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.455 [2024-07-15 09:30:11.518818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:24.455 [2024-07-15 09:30:11.519153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.455 [2024-07-15 09:30:11.519162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:24.455 [2024-07-15 09:30:11.519172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.455 [2024-07-15 09:30:11.519179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:24.455 [2024-07-15 09:30:11.519546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.455 [2024-07-15 09:30:11.519555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:24.455 [2024-07-15 09:30:11.519565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.455 [2024-07-15 09:30:11.519570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:24.455 [2024-07-15 09:30:11.519940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.455 [2024-07-15 09:30:11.519949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:24.455 [2024-07-15 09:30:11.519961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.455 [2024-07-15 09:30:11.519967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:24.455 passed 00:21:24.455 Test: blockdev nvme passthru rw ...passed 00:21:24.455 Test: blockdev nvme passthru vendor specific ...[2024-07-15 09:30:11.604354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.455 [2024-07-15 09:30:11.604369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:24.455 [2024-07-15 09:30:11.604585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.455 [2024-07-15 09:30:11.604594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:24.455 [2024-07-15 09:30:11.604826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.455 [2024-07-15 09:30:11.604835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:24.455 [2024-07-15 09:30:11.605105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.455 [2024-07-15 09:30:11.605113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:24.455 passed 00:21:24.455 Test: blockdev nvme admin passthru ...passed 00:21:24.715 Test: blockdev copy ...passed 00:21:24.715 00:21:24.715 Run Summary: Type Total Ran Passed Failed Inactive 00:21:24.715 suites 1 1 n/a 0 0 00:21:24.715 tests 23 23 23 0 0 00:21:24.715 asserts 152 152 152 0 n/a 00:21:24.715 00:21:24.715 Elapsed time = 1.311 seconds 00:21:24.715 09:30:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:24.715 09:30:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.715 09:30:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:24.976 09:30:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.976 09:30:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:24.976 09:30:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:24.976 09:30:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:24.976 09:30:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:24.976 09:30:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:24.976 09:30:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:24.976 09:30:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:24.976 09:30:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:24.976 rmmod nvme_tcp 00:21:24.976 rmmod nvme_fabrics 00:21:24.976 rmmod nvme_keyring 00:21:24.976 09:30:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:24.976 09:30:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:21:24.976 09:30:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:24.976 09:30:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 727293 ']' 00:21:24.976 09:30:11 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 727293 00:21:24.976 09:30:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 727293 ']' 00:21:24.976 09:30:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 727293 00:21:24.976 09:30:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:21:24.976 09:30:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:24.976 09:30:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 727293 00:21:24.976 09:30:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:21:24.976 09:30:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:21:24.976 09:30:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 727293' 00:21:24.976 killing process with pid 727293 00:21:24.976 09:30:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 727293 00:21:24.976 09:30:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 727293 00:21:25.236 09:30:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:25.236 09:30:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:25.236 09:30:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:25.236 09:30:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:25.236 09:30:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:25.236 09:30:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.236 09:30:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.236 09:30:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.784 09:30:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:27.784 00:21:27.784 real 0m13.029s 00:21:27.784 user 0m13.789s 00:21:27.784 sys 0m7.031s 00:21:27.784 09:30:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:27.784 09:30:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:27.784 ************************************ 00:21:27.784 END TEST nvmf_bdevio_no_huge 00:21:27.784 ************************************ 00:21:27.784 09:30:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:27.784 09:30:14 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:27.784 09:30:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:27.784 09:30:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:27.784 09:30:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:27.784 ************************************ 00:21:27.784 START TEST nvmf_tls 00:21:27.784 ************************************ 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:27.784 * Looking for test storage... 
00:21:27.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:27.784 09:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.948 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:35.948 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:35.948 
09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:35.948 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:35.948 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:35.948 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:35.948 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:35.948 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:35.948 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:35.948 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:35.948 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:35.948 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:35.948 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:35.948 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:35.948 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:35.948 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:35.948 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:35.948 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:35.948 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:35.949 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:35.949 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:35.949 Found net devices under 0000:31:00.0: cvl_0_0 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:35.949 Found net devices under 0000:31:00.1: cvl_0_1 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:35.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:35.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:21:35.949 00:21:35.949 --- 10.0.0.2 ping statistics --- 00:21:35.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.949 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:35.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:35.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:21:35.949 00:21:35.949 --- 10.0.0.1 ping statistics --- 00:21:35.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.949 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=732550 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 732550 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 732550 ']' 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:35.949 09:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.949 [2024-07-15 09:30:22.745968] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:21:35.949 [2024-07-15 09:30:22.746026] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.949 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.949 [2024-07-15 09:30:22.839135] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.949 [2024-07-15 09:30:22.902422] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.949 [2024-07-15 09:30:22.902457] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:35.949 [2024-07-15 09:30:22.902465] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.949 [2024-07-15 09:30:22.902472] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.949 [2024-07-15 09:30:22.902477] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:35.949 [2024-07-15 09:30:22.902496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.519 09:30:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:36.519 09:30:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:36.519 09:30:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:36.519 09:30:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:36.519 09:30:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.519 09:30:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.519 09:30:23 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:36.519 09:30:23 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:36.779 true 00:21:36.779 09:30:23 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:36.779 09:30:23 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:36.779 09:30:23 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:36.779 09:30:23 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:36.779 09:30:23 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:37.039 09:30:24 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:37.039 09:30:24 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:37.338 09:30:24 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:37.338 09:30:24 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:37.338 09:30:24 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:37.338 09:30:24 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:37.338 09:30:24 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:37.684 09:30:24 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:37.684 09:30:24 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:37.684 09:30:24 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:37.684 09:30:24 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:37.684 09:30:24 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:37.684 09:30:24 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:37.684 09:30:24 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:37.944 09:30:24 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:37.944 09:30:24 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:37.944 09:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:37.944 09:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:37.944 09:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:38.205 09:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:38.205 09:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.xybYPI9MRx 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.SDeyLaygSW 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.xybYPI9MRx 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.SDeyLaygSW 00:21:38.465 09:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:21:38.725 09:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:38.985 09:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.xybYPI9MRx 00:21:38.985 09:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.xybYPI9MRx 00:21:38.985 09:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:38.985 [2024-07-15 09:30:26.125501] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.985 09:30:26 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:39.245 09:30:26 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:39.517 [2024-07-15 09:30:26.446236] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:39.517 [2024-07-15 09:30:26.446398] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.517 09:30:26 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:39.517 malloc0 00:21:39.517 09:30:26 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:39.781 09:30:26 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xybYPI9MRx 00:21:39.781 [2024-07-15 09:30:26.913406] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:39.781 09:30:26 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.xybYPI9MRx 00:21:39.781 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.034 Initializing NVMe Controllers 00:21:52.034 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:52.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:52.034 Initialization complete. Launching workers. 
00:21:52.034 ======================================================== 00:21:52.034 Latency(us) 00:21:52.034 Device Information : IOPS MiB/s Average min max 00:21:52.034 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19096.26 74.59 3351.41 1124.12 4517.92 00:21:52.034 ======================================================== 00:21:52.034 Total : 19096.26 74.59 3351.41 1124.12 4517.92 00:21:52.034 00:21:52.034 09:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xybYPI9MRx 00:21:52.034 09:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:52.034 09:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:52.034 09:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:52.034 09:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xybYPI9MRx' 00:21:52.034 09:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:52.034 09:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=735362 00:21:52.034 09:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:52.034 09:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 735362 /var/tmp/bdevperf.sock 00:21:52.034 09:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:52.034 09:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 735362 ']' 00:21:52.034 09:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:52.034 09:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:52.034 09:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:52.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:52.034 09:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:52.034 09:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:52.034 [2024-07-15 09:30:37.081634] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:21:52.035 [2024-07-15 09:30:37.081688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid735362 ] 00:21:52.035 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.035 [2024-07-15 09:30:37.136676] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.035 [2024-07-15 09:30:37.189304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.035 09:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.035 09:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:52.035 09:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xybYPI9MRx 00:21:52.035 [2024-07-15 09:30:38.001434] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:52.035 [2024-07-15 09:30:38.001489] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:52.035 TLSTESTn1 00:21:52.035 09:30:38 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:52.035 Running I/O for 10 seconds... 00:22:02.039 00:22:02.039 Latency(us) 00:22:02.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.039 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:02.039 Verification LBA range: start 0x0 length 0x2000 00:22:02.039 TLSTESTn1 : 10.01 4092.77 15.99 0.00 0.00 31234.92 5106.35 78206.29 00:22:02.039 =================================================================================================================== 00:22:02.039 Total : 4092.77 15.99 0.00 0.00 31234.92 5106.35 78206.29 00:22:02.039 0 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 735362 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 735362 ']' 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 735362 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 735362 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 735362' 00:22:02.039 killing process with pid 735362 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 735362 00:22:02.039 Received shutdown signal, test time was about 10.000000 seconds 00:22:02.039 00:22:02.039 Latency(us) 00:22:02.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:22:02.039 =================================================================================================================== 00:22:02.039 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:02.039 [2024-07-15 09:30:48.301700] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 735362 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SDeyLaygSW 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SDeyLaygSW 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SDeyLaygSW 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.SDeyLaygSW' 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=737628 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 737628 /var/tmp/bdevperf.sock 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 737628 ']' 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.039 09:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.039 [2024-07-15 09:30:48.466346] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:22:02.039 [2024-07-15 09:30:48.466397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid737628 ] 00:22:02.039 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.039 [2024-07-15 09:30:48.522246] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.039 [2024-07-15 09:30:48.573330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.299 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:02.299 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:02.299 09:30:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SDeyLaygSW 00:22:02.299 [2024-07-15 09:30:49.385544] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:02.299 [2024-07-15 09:30:49.385610] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:02.299 [2024-07-15 09:30:49.389994] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:02.299 [2024-07-15 09:30:49.390647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc1d80 (107): Transport endpoint is not connected 00:22:02.299 [2024-07-15 09:30:49.391642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc1d80 (9): Bad file descriptor 00:22:02.299 [2024-07-15 09:30:49.392644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.299 [2024-07-15 09:30:49.392652] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:02.299 [2024-07-15 09:30:49.392659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:02.299 request: 00:22:02.299 { 00:22:02.299 "name": "TLSTEST", 00:22:02.299 "trtype": "tcp", 00:22:02.299 "traddr": "10.0.0.2", 00:22:02.299 "adrfam": "ipv4", 00:22:02.299 "trsvcid": "4420", 00:22:02.299 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.299 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:02.299 "prchk_reftag": false, 00:22:02.299 "prchk_guard": false, 00:22:02.299 "hdgst": false, 00:22:02.299 "ddgst": false, 00:22:02.299 "psk": "/tmp/tmp.SDeyLaygSW", 00:22:02.299 "method": "bdev_nvme_attach_controller", 00:22:02.299 "req_id": 1 00:22:02.299 } 00:22:02.299 Got JSON-RPC error response 00:22:02.299 response: 00:22:02.299 { 00:22:02.299 "code": -5, 00:22:02.299 "message": "Input/output error" 00:22:02.299 } 00:22:02.299 09:30:49 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 737628 00:22:02.299 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 737628 ']' 00:22:02.299 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 737628 00:22:02.299 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:02.299 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:02.299 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 737628 00:22:02.299 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:02.299 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:02.299 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 737628' 00:22:02.299 killing process with pid 737628 00:22:02.300 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 737628 00:22:02.300 Received shutdown signal, test time was about 10.000000 seconds 00:22:02.300 00:22:02.300 Latency(us) 00:22:02.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.300 =================================================================================================================== 00:22:02.300 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:02.300 [2024-07-15 09:30:49.477825] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:02.300 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 737628 00:22:02.562 09:30:49 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:02.562 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:02.562 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:02.562 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:02.562 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:02.562 09:30:49 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xybYPI9MRx 00:22:02.562 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:02.562 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xybYPI9MRx 00:22:02.562 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:02.562 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.562 09:30:49 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:02.562 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.562 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xybYPI9MRx 00:22:02.562 09:30:49 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:02.562 09:30:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:02.562 09:30:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:02.562 09:30:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xybYPI9MRx' 00:22:02.562 09:30:49 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:02.562 09:30:49 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=737916 00:22:02.562 09:30:49 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:02.562 09:30:49 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 737916 /var/tmp/bdevperf.sock 00:22:02.562 09:30:49 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:02.562 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 737916 ']' 00:22:02.562 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.562 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.563 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.563 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.563 09:30:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.563 [2024-07-15 09:30:49.635170] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:22:02.563 [2024-07-15 09:30:49.635223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid737916 ] 00:22:02.563 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.563 [2024-07-15 09:30:49.690831] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.563 [2024-07-15 09:30:49.742851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.503 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.503 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:03.503 09:30:50 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.xybYPI9MRx 00:22:03.503 [2024-07-15 09:30:50.542894] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:03.503 [2024-07-15 09:30:50.542962] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:03.503 [2024-07-15 09:30:50.551821] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:03.503 [2024-07-15 09:30:50.551841] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:03.503 [2024-07-15 09:30:50.551859] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:03.503 [2024-07-15 09:30:50.551999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f5d80 (107): Transport endpoint is not connected 00:22:03.503 [2024-07-15 09:30:50.552975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f5d80 (9): Bad file descriptor 00:22:03.503 [2024-07-15 09:30:50.553981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.503 [2024-07-15 09:30:50.553990] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:03.503 [2024-07-15 09:30:50.553998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:03.503 request: 00:22:03.503 { 00:22:03.503 "name": "TLSTEST", 00:22:03.503 "trtype": "tcp", 00:22:03.503 "traddr": "10.0.0.2", 00:22:03.503 "adrfam": "ipv4", 00:22:03.503 "trsvcid": "4420", 00:22:03.503 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.503 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:03.503 "prchk_reftag": false, 00:22:03.503 "prchk_guard": false, 00:22:03.503 "hdgst": false, 00:22:03.503 "ddgst": false, 00:22:03.503 "psk": "/tmp/tmp.xybYPI9MRx", 00:22:03.503 "method": "bdev_nvme_attach_controller", 00:22:03.503 "req_id": 1 00:22:03.503 } 00:22:03.503 Got JSON-RPC error response 00:22:03.503 response: 00:22:03.503 { 00:22:03.503 "code": -5, 00:22:03.503 "message": "Input/output error" 00:22:03.503 } 00:22:03.503 09:30:50 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 737916 00:22:03.503 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 737916 ']' 00:22:03.503 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 737916 00:22:03.503 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:03.503 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:03.503 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 737916 00:22:03.503 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:03.503 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:03.503 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 737916' 00:22:03.503 killing process with pid 737916 00:22:03.503 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 737916 00:22:03.503 Received shutdown signal, test time was about 10.000000 seconds 00:22:03.503 00:22:03.503 Latency(us) 00:22:03.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.503 =================================================================================================================== 00:22:03.503 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:03.503 [2024-07-15 09:30:50.637076] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:03.503 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 737916 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xybYPI9MRx 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xybYPI9MRx 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xybYPI9MRx 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xybYPI9MRx' 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=738001 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 738001 /var/tmp/bdevperf.sock 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 738001 ']' 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:03.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:03.764 09:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.764 [2024-07-15 09:30:50.794919] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:22:03.764 [2024-07-15 09:30:50.794971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid738001 ] 00:22:03.764 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.764 [2024-07-15 09:30:50.850802] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.764 [2024-07-15 09:30:50.902691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:04.704 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:04.704 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:04.704 09:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xybYPI9MRx 00:22:04.704 [2024-07-15 09:30:51.715123] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:04.704 [2024-07-15 09:30:51.715189] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:04.704 [2024-07-15 09:30:51.724657] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:04.704 [2024-07-15 09:30:51.724675] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:04.704 [2024-07-15 09:30:51.724693] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:04.704 [2024-07-15 09:30:51.725270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b13d80 (107): Transport endpoint is not connected 00:22:04.704 [2024-07-15 09:30:51.726266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b13d80 (9): Bad file descriptor 00:22:04.704 [2024-07-15 09:30:51.727267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:04.704 [2024-07-15 09:30:51.727276] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:04.704 [2024-07-15 09:30:51.727283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:04.704 request: 00:22:04.704 { 00:22:04.704 "name": "TLSTEST", 00:22:04.704 "trtype": "tcp", 00:22:04.704 "traddr": "10.0.0.2", 00:22:04.704 "adrfam": "ipv4", 00:22:04.704 "trsvcid": "4420", 00:22:04.704 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:04.704 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:04.704 "prchk_reftag": false, 00:22:04.704 "prchk_guard": false, 00:22:04.704 "hdgst": false, 00:22:04.704 "ddgst": false, 00:22:04.704 "psk": "/tmp/tmp.xybYPI9MRx", 00:22:04.704 "method": "bdev_nvme_attach_controller", 00:22:04.704 "req_id": 1 00:22:04.704 } 00:22:04.704 Got JSON-RPC error response 00:22:04.704 response: 00:22:04.704 { 00:22:04.704 "code": -5, 00:22:04.704 "message": "Input/output error" 00:22:04.704 } 00:22:04.704 09:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 738001 00:22:04.704 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 738001 ']' 00:22:04.704 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 738001 00:22:04.704 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:04.704 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:04.704 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 738001 00:22:04.704 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:04.704 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:04.704 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 738001' 00:22:04.704 killing process with pid 738001 00:22:04.704 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 738001 00:22:04.704 Received shutdown signal, test time was about 10.000000 seconds 00:22:04.704 00:22:04.704 Latency(us) 00:22:04.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.704 =================================================================================================================== 00:22:04.704 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:04.704 [2024-07-15 09:30:51.812652] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:04.704 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 738001 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t 
run_bdevperf 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=738325 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 738325 /var/tmp/bdevperf.sock 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 738325 ']' 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:04.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:04.965 09:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.965 [2024-07-15 09:30:51.944402] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:22:04.965 [2024-07-15 09:30:51.944445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid738325 ] 00:22:04.966 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.966 [2024-07-15 09:30:51.992166] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.966 [2024-07-15 09:30:52.042861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:04.966 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:04.966 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:04.966 09:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:05.226 [2024-07-15 09:30:52.266000] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:05.226 [2024-07-15 09:30:52.267182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23da460 (9): Bad file descriptor 00:22:05.226 [2024-07-15 09:30:52.268181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.226 [2024-07-15 09:30:52.268189] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:05.226 [2024-07-15 09:30:52.268196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:05.226 request: 00:22:05.226 { 00:22:05.226 "name": "TLSTEST", 00:22:05.226 "trtype": "tcp", 00:22:05.226 "traddr": "10.0.0.2", 00:22:05.226 "adrfam": "ipv4", 00:22:05.226 "trsvcid": "4420", 00:22:05.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:05.226 "prchk_reftag": false, 00:22:05.226 "prchk_guard": false, 00:22:05.226 "hdgst": false, 00:22:05.226 "ddgst": false, 00:22:05.226 "method": "bdev_nvme_attach_controller", 00:22:05.226 "req_id": 1 00:22:05.226 } 00:22:05.226 Got JSON-RPC error response 00:22:05.226 response: 00:22:05.227 { 00:22:05.227 "code": -5, 00:22:05.227 "message": "Input/output error" 00:22:05.227 } 00:22:05.227 09:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 738325 00:22:05.227 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 738325 ']' 00:22:05.227 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 738325 00:22:05.227 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:05.227 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:05.227 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 738325 00:22:05.227 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:05.227 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:05.227 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 738325' 00:22:05.227 killing process with pid 738325 00:22:05.227 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 738325 00:22:05.227 Received shutdown signal, test time was about 10.000000 seconds 00:22:05.227 00:22:05.227 Latency(us) 00:22:05.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.227 =================================================================================================================== 00:22:05.227 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:05.227 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 738325 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 732550 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 732550 ']' 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 732550 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 732550 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 732550' 00:22:05.488 killing 
process with pid 732550 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 732550 00:22:05.488 [2024-07-15 09:30:52.511797] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 732550 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.Void7Inglv 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.Void7Inglv 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:05.488 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.749 09:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=738461 00:22:05.749 09:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 738461 00:22:05.749 09:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:05.749 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 738461 ']' 00:22:05.749 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.749 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:05.749 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.749 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:05.749 09:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.749 [2024-07-15 09:30:52.745720] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:22:05.749 [2024-07-15 09:30:52.745777] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.749 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.749 [2024-07-15 09:30:52.836438] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.749 [2024-07-15 09:30:52.896880] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.749 [2024-07-15 09:30:52.896917] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.749 [2024-07-15 09:30:52.896923] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.749 [2024-07-15 09:30:52.896928] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.749 [2024-07-15 09:30:52.896932] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:05.749 [2024-07-15 09:30:52.896948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.322 09:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:06.322 09:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:06.322 09:30:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:06.322 09:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:06.322 09:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.583 09:30:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.583 09:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.Void7Inglv 00:22:06.583 09:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Void7Inglv 00:22:06.583 09:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:06.583 [2024-07-15 09:30:53.687413] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.583 09:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:06.844 09:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:06.844 [2024-07-15 09:30:53.984117] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:06.844 [2024-07-15 09:30:53.984268] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.844 09:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:07.105 malloc0 00:22:07.105 09:30:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:07.105 09:30:54 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.Void7Inglv 00:22:07.367 [2024-07-15 09:30:54.418834] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:07.367 09:30:54 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Void7Inglv 00:22:07.367 09:30:54 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:07.367 09:30:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:07.367 09:30:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:07.367 09:30:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Void7Inglv' 00:22:07.367 09:30:54 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:07.367 09:30:54 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=738829 00:22:07.367 09:30:54 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:07.367 09:30:54 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 738829 /var/tmp/bdevperf.sock 00:22:07.367 09:30:54 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:07.367 09:30:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 738829 ']' 00:22:07.367 09:30:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:07.367 09:30:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:07.367 09:30:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:07.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:07.367 09:30:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:07.367 09:30:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:07.367 [2024-07-15 09:30:54.491607] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:22:07.367 [2024-07-15 09:30:54.491659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid738829 ] 00:22:07.367 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.367 [2024-07-15 09:30:54.547893] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.630 [2024-07-15 09:30:54.601052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.203 09:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:08.203 09:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:08.203 09:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Void7Inglv 00:22:08.203 [2024-07-15 09:30:55.393220] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:08.203 [2024-07-15 09:30:55.393283] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:08.464 TLSTESTn1 00:22:08.464 09:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:08.464 Running I/O for 10 seconds... 00:22:18.468 00:22:18.468 Latency(us) 00:22:18.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.468 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:18.468 Verification LBA range: start 0x0 length 0x2000 00:22:18.468 TLSTESTn1 : 10.04 6346.59 24.79 0.00 0.00 20118.72 4505.60 35170.99 00:22:18.468 =================================================================================================================== 00:22:18.468 Total : 6346.59 24.79 0.00 0.00 20118.72 4505.60 35170.99 00:22:18.468 0 00:22:18.468 09:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:18.468 09:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 738829 00:22:18.468 09:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 738829 ']' 00:22:18.468 09:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 738829 00:22:18.468 09:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:18.468 09:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 738829 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 738829' 00:22:18.729 killing process with pid 738829 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 738829 00:22:18.729 Received shutdown signal, test time was about 10.000000 seconds 00:22:18.729 00:22:18.729 Latency(us) 00:22:18.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:22:18.729 =================================================================================================================== 00:22:18.729 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:18.729 [2024-07-15 09:31:05.715411] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 738829 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.Void7Inglv 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Void7Inglv 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Void7Inglv 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Void7Inglv 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Void7Inglv' 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=741054 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 741054 /var/tmp/bdevperf.sock 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 741054 ']' 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:18.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:18.729 09:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.729 [2024-07-15 09:31:05.885179] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:22:18.730 [2024-07-15 09:31:05.885231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid741054 ] 00:22:18.730 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.990 [2024-07-15 09:31:05.940879] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.990 [2024-07-15 09:31:05.992766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.603 09:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:19.603 09:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:19.603 09:31:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Void7Inglv 00:22:19.864 [2024-07-15 09:31:06.804914] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:19.864 [2024-07-15 09:31:06.804959] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:19.864 [2024-07-15 09:31:06.804964] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.Void7Inglv 00:22:19.864 request: 00:22:19.864 { 00:22:19.864 "name": "TLSTEST", 00:22:19.864 "trtype": "tcp", 00:22:19.864 "traddr": "10.0.0.2", 00:22:19.864 "adrfam": "ipv4", 00:22:19.864 "trsvcid": "4420", 00:22:19.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:19.864 "prchk_reftag": false, 00:22:19.864 "prchk_guard": false, 00:22:19.864 "hdgst": false, 00:22:19.864 "ddgst": false, 00:22:19.864 "psk": "/tmp/tmp.Void7Inglv", 00:22:19.864 "method": "bdev_nvme_attach_controller", 00:22:19.864 "req_id": 1 00:22:19.864 } 00:22:19.864 Got JSON-RPC error response 00:22:19.864 response: 00:22:19.864 { 00:22:19.864 "code": -1, 00:22:19.864 "message": "Operation not permitted" 00:22:19.864 } 00:22:19.864 09:31:06 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 741054 00:22:19.864 09:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 741054 ']' 00:22:19.864 09:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 741054 00:22:19.864 09:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:19.864 09:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:19.864 09:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 741054 00:22:19.864 09:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:19.864 09:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:19.864 09:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 741054' 00:22:19.864 killing process with pid 741054 00:22:19.864 09:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 741054 00:22:19.864 Received shutdown signal, test time was about 10.000000 seconds 00:22:19.864 00:22:19.864 Latency(us) 00:22:19.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.864 =================================================================================================================== 
00:22:19.864 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:19.864 09:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 741054 00:22:19.864 09:31:06 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:19.864 09:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:19.864 09:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:19.864 09:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:19.864 09:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:19.864 09:31:06 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 738461 00:22:19.864 09:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 738461 ']' 00:22:19.864 09:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 738461 00:22:19.864 09:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:19.864 09:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:19.864 09:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 738461 00:22:19.864 09:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:19.864 09:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:19.864 09:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 738461' 00:22:19.864 killing process with pid 738461 00:22:19.864 09:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 738461 00:22:19.864 [2024-07-15 09:31:07.054109] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:19.864 09:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 738461 00:22:20.126 09:31:07 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:20.126 09:31:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:20.126 09:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:20.126 09:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:20.126 09:31:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=741396 00:22:20.126 09:31:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 741396 00:22:20.126 09:31:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:20.126 09:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 741396 ']' 00:22:20.126 09:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.126 09:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:20.126 09:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.126 09:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:20.126 09:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:20.126 [2024-07-15 09:31:07.230869] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
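With the key world-readable, the bdevperf-side RPC fails as intended: bdev_nvme_load_psk reports "Incorrect permissions for PSK file" and the JSON-RPC reply is code -1 / "Operation not permitted". A minimal reproduction of that failing call, assuming a bdevperf instance already listening on /var/tmp/bdevperf.sock and the key path used in this run:

    # Sketch: attach over TCP/TLS with a PSK file whose mode is too permissive.
    KEY=/tmp/tmp.Void7Inglv
    RPC=scripts/rpc.py                      # run from the SPDK source tree
    chmod 0666 "$KEY"                       # world-accessible key is rejected
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk "$KEY" \
      && echo "unexpected success" \
      || echo "rejected as expected (Operation not permitted)"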
00:22:20.126 [2024-07-15 09:31:07.230917] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:20.126 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.126 [2024-07-15 09:31:07.316310] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.387 [2024-07-15 09:31:07.368118] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:20.387 [2024-07-15 09:31:07.368154] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:20.387 [2024-07-15 09:31:07.368159] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:20.387 [2024-07-15 09:31:07.368163] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:20.388 [2024-07-15 09:31:07.368167] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:20.388 [2024-07-15 09:31:07.368184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.959 09:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:20.959 09:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:20.959 09:31:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:20.959 09:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:20.959 09:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:20.959 09:31:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.959 09:31:08 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.Void7Inglv 00:22:20.959 09:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:20.959 09:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Void7Inglv 00:22:20.959 09:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:22:20.959 09:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:20.959 09:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:22:20.960 09:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:20.960 09:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.Void7Inglv 00:22:20.960 09:31:08 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Void7Inglv 00:22:20.960 09:31:08 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:21.221 [2024-07-15 09:31:08.169427] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:21.221 09:31:08 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:21.221 09:31:08 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:21.482 [2024-07-15 09:31:08.458123] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is 
considered experimental 00:22:21.482 [2024-07-15 09:31:08.458279] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:21.482 09:31:08 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:21.482 malloc0 00:22:21.482 09:31:08 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:21.744 09:31:08 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Void7Inglv 00:22:21.744 [2024-07-15 09:31:08.909085] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:21.744 [2024-07-15 09:31:08.909112] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:21.744 [2024-07-15 09:31:08.909133] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:21.744 request: 00:22:21.744 { 00:22:21.744 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.744 "host": "nqn.2016-06.io.spdk:host1", 00:22:21.744 "psk": "/tmp/tmp.Void7Inglv", 00:22:21.744 "method": "nvmf_subsystem_add_host", 00:22:21.744 "req_id": 1 00:22:21.744 } 00:22:21.744 Got JSON-RPC error response 00:22:21.744 response: 00:22:21.744 { 00:22:21.744 "code": -32603, 00:22:21.744 "message": "Internal error" 00:22:21.744 } 00:22:21.744 09:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:21.744 09:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:21.744 09:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:21.744 09:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:21.744 09:31:08 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 741396 00:22:21.744 09:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 741396 ']' 00:22:21.744 09:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 741396 00:22:21.744 09:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:21.744 09:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:21.744 09:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 741396 00:22:22.005 09:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:22.006 09:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:22.006 09:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 741396' 00:22:22.006 killing process with pid 741396 00:22:22.006 09:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 741396 00:22:22.006 09:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 741396 00:22:22.006 09:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.Void7Inglv 00:22:22.006 09:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:22.006 09:31:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:22.006 09:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:22.006 09:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.006 09:31:09 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=741769 00:22:22.006 09:31:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 741769 00:22:22.006 09:31:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:22.006 09:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 741769 ']' 00:22:22.006 09:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.006 09:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:22.006 09:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.006 09:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:22.006 09:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.006 [2024-07-15 09:31:09.164511] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:22:22.006 [2024-07-15 09:31:09.164566] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.006 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.267 [2024-07-15 09:31:09.250833] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.267 [2024-07-15 09:31:09.305103] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.267 [2024-07-15 09:31:09.305134] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.267 [2024-07-15 09:31:09.305139] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.267 [2024-07-15 09:31:09.305147] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.267 [2024-07-15 09:31:09.305152] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
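nvmf_subsystem_add_host enforces the same permission check on the target side: with the key still at 0666 it returned "Could not retrieve PSK from file" (JSON-RPC -32603, Internal error). After chmod 0600 the target is restarted (pid 741769) and the setup_nvmf_tgt sequence repeated below goes through. That sequence amounts to the following RPCs (sketch; assumes a running nvmf_tgt reachable on its default /var/tmp/spdk.sock and the paths from this run):

    KEY=/tmp/tmp.Void7Inglv
    RPC=scripts/rpc.py
    chmod 0600 "$KEY"   # PSK file must not be group- or world-accessible
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"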
00:22:22.267 [2024-07-15 09:31:09.305165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.839 09:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:22.839 09:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:22.840 09:31:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:22.840 09:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:22.840 09:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.840 09:31:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.840 09:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.Void7Inglv 00:22:22.840 09:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Void7Inglv 00:22:22.840 09:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:23.101 [2024-07-15 09:31:10.082480] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.101 09:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:23.101 09:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:23.362 [2024-07-15 09:31:10.375178] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:23.362 [2024-07-15 09:31:10.375335] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.363 09:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:23.363 malloc0 00:22:23.363 09:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:23.624 09:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Void7Inglv 00:22:23.624 [2024-07-15 09:31:10.798108] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:23.624 09:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:23.624 09:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=742130 00:22:23.624 09:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:23.624 09:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 742130 /var/tmp/bdevperf.sock 00:22:23.624 09:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 742130 ']' 00:22:23.624 09:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.624 09:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:23.624 09:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:23.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:23.624 09:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:23.624 09:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.886 [2024-07-15 09:31:10.834253] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:22:23.886 [2024-07-15 09:31:10.834293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid742130 ] 00:22:23.886 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.886 [2024-07-15 09:31:10.881228] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.886 [2024-07-15 09:31:10.933949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.886 09:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:23.886 09:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:23.886 09:31:11 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Void7Inglv 00:22:24.147 [2024-07-15 09:31:11.144885] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:24.147 [2024-07-15 09:31:11.144940] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:24.147 TLSTESTn1 00:22:24.147 09:31:11 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:24.409 09:31:11 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:24.409 "subsystems": [ 00:22:24.409 { 00:22:24.409 "subsystem": "keyring", 00:22:24.409 "config": [] 00:22:24.409 }, 00:22:24.409 { 00:22:24.409 "subsystem": "iobuf", 00:22:24.409 "config": [ 00:22:24.409 { 00:22:24.409 "method": "iobuf_set_options", 00:22:24.409 "params": { 00:22:24.409 "small_pool_count": 8192, 00:22:24.409 "large_pool_count": 1024, 00:22:24.409 "small_bufsize": 8192, 00:22:24.409 "large_bufsize": 135168 00:22:24.409 } 00:22:24.409 } 00:22:24.409 ] 00:22:24.409 }, 00:22:24.409 { 00:22:24.409 "subsystem": "sock", 00:22:24.409 "config": [ 00:22:24.409 { 00:22:24.409 "method": "sock_set_default_impl", 00:22:24.409 "params": { 00:22:24.409 "impl_name": "posix" 00:22:24.409 } 00:22:24.409 }, 00:22:24.409 { 00:22:24.409 "method": "sock_impl_set_options", 00:22:24.409 "params": { 00:22:24.409 "impl_name": "ssl", 00:22:24.409 "recv_buf_size": 4096, 00:22:24.409 "send_buf_size": 4096, 00:22:24.409 "enable_recv_pipe": true, 00:22:24.409 "enable_quickack": false, 00:22:24.409 "enable_placement_id": 0, 00:22:24.409 "enable_zerocopy_send_server": true, 00:22:24.409 "enable_zerocopy_send_client": false, 00:22:24.409 "zerocopy_threshold": 0, 00:22:24.409 "tls_version": 0, 00:22:24.409 "enable_ktls": false 00:22:24.409 } 00:22:24.409 }, 00:22:24.409 { 00:22:24.409 "method": "sock_impl_set_options", 00:22:24.409 "params": { 00:22:24.409 "impl_name": "posix", 00:22:24.409 "recv_buf_size": 2097152, 00:22:24.409 
"send_buf_size": 2097152, 00:22:24.409 "enable_recv_pipe": true, 00:22:24.409 "enable_quickack": false, 00:22:24.409 "enable_placement_id": 0, 00:22:24.409 "enable_zerocopy_send_server": true, 00:22:24.409 "enable_zerocopy_send_client": false, 00:22:24.409 "zerocopy_threshold": 0, 00:22:24.409 "tls_version": 0, 00:22:24.409 "enable_ktls": false 00:22:24.409 } 00:22:24.409 } 00:22:24.409 ] 00:22:24.409 }, 00:22:24.409 { 00:22:24.409 "subsystem": "vmd", 00:22:24.409 "config": [] 00:22:24.409 }, 00:22:24.409 { 00:22:24.409 "subsystem": "accel", 00:22:24.409 "config": [ 00:22:24.409 { 00:22:24.409 "method": "accel_set_options", 00:22:24.409 "params": { 00:22:24.409 "small_cache_size": 128, 00:22:24.409 "large_cache_size": 16, 00:22:24.409 "task_count": 2048, 00:22:24.409 "sequence_count": 2048, 00:22:24.409 "buf_count": 2048 00:22:24.409 } 00:22:24.409 } 00:22:24.409 ] 00:22:24.409 }, 00:22:24.409 { 00:22:24.409 "subsystem": "bdev", 00:22:24.409 "config": [ 00:22:24.409 { 00:22:24.409 "method": "bdev_set_options", 00:22:24.409 "params": { 00:22:24.409 "bdev_io_pool_size": 65535, 00:22:24.409 "bdev_io_cache_size": 256, 00:22:24.409 "bdev_auto_examine": true, 00:22:24.409 "iobuf_small_cache_size": 128, 00:22:24.409 "iobuf_large_cache_size": 16 00:22:24.409 } 00:22:24.409 }, 00:22:24.409 { 00:22:24.409 "method": "bdev_raid_set_options", 00:22:24.409 "params": { 00:22:24.409 "process_window_size_kb": 1024 00:22:24.409 } 00:22:24.409 }, 00:22:24.409 { 00:22:24.409 "method": "bdev_iscsi_set_options", 00:22:24.409 "params": { 00:22:24.409 "timeout_sec": 30 00:22:24.409 } 00:22:24.409 }, 00:22:24.409 { 00:22:24.409 "method": "bdev_nvme_set_options", 00:22:24.409 "params": { 00:22:24.409 "action_on_timeout": "none", 00:22:24.409 "timeout_us": 0, 00:22:24.409 "timeout_admin_us": 0, 00:22:24.409 "keep_alive_timeout_ms": 10000, 00:22:24.409 "arbitration_burst": 0, 00:22:24.409 "low_priority_weight": 0, 00:22:24.409 "medium_priority_weight": 0, 00:22:24.409 "high_priority_weight": 0, 00:22:24.409 "nvme_adminq_poll_period_us": 10000, 00:22:24.409 "nvme_ioq_poll_period_us": 0, 00:22:24.409 "io_queue_requests": 0, 00:22:24.409 "delay_cmd_submit": true, 00:22:24.409 "transport_retry_count": 4, 00:22:24.409 "bdev_retry_count": 3, 00:22:24.409 "transport_ack_timeout": 0, 00:22:24.409 "ctrlr_loss_timeout_sec": 0, 00:22:24.409 "reconnect_delay_sec": 0, 00:22:24.409 "fast_io_fail_timeout_sec": 0, 00:22:24.409 "disable_auto_failback": false, 00:22:24.409 "generate_uuids": false, 00:22:24.409 "transport_tos": 0, 00:22:24.409 "nvme_error_stat": false, 00:22:24.409 "rdma_srq_size": 0, 00:22:24.409 "io_path_stat": false, 00:22:24.410 "allow_accel_sequence": false, 00:22:24.410 "rdma_max_cq_size": 0, 00:22:24.410 "rdma_cm_event_timeout_ms": 0, 00:22:24.410 "dhchap_digests": [ 00:22:24.410 "sha256", 00:22:24.410 "sha384", 00:22:24.410 "sha512" 00:22:24.410 ], 00:22:24.410 "dhchap_dhgroups": [ 00:22:24.410 "null", 00:22:24.410 "ffdhe2048", 00:22:24.410 "ffdhe3072", 00:22:24.410 "ffdhe4096", 00:22:24.410 "ffdhe6144", 00:22:24.410 "ffdhe8192" 00:22:24.410 ] 00:22:24.410 } 00:22:24.410 }, 00:22:24.410 { 00:22:24.410 "method": "bdev_nvme_set_hotplug", 00:22:24.410 "params": { 00:22:24.410 "period_us": 100000, 00:22:24.410 "enable": false 00:22:24.410 } 00:22:24.410 }, 00:22:24.410 { 00:22:24.410 "method": "bdev_malloc_create", 00:22:24.410 "params": { 00:22:24.410 "name": "malloc0", 00:22:24.410 "num_blocks": 8192, 00:22:24.410 "block_size": 4096, 00:22:24.410 "physical_block_size": 4096, 00:22:24.410 "uuid": 
"73cb0589-be86-400d-9fe3-ae0fffb80031", 00:22:24.410 "optimal_io_boundary": 0 00:22:24.410 } 00:22:24.410 }, 00:22:24.410 { 00:22:24.410 "method": "bdev_wait_for_examine" 00:22:24.410 } 00:22:24.410 ] 00:22:24.410 }, 00:22:24.410 { 00:22:24.410 "subsystem": "nbd", 00:22:24.410 "config": [] 00:22:24.410 }, 00:22:24.410 { 00:22:24.410 "subsystem": "scheduler", 00:22:24.410 "config": [ 00:22:24.410 { 00:22:24.410 "method": "framework_set_scheduler", 00:22:24.410 "params": { 00:22:24.410 "name": "static" 00:22:24.410 } 00:22:24.410 } 00:22:24.410 ] 00:22:24.410 }, 00:22:24.410 { 00:22:24.410 "subsystem": "nvmf", 00:22:24.410 "config": [ 00:22:24.410 { 00:22:24.410 "method": "nvmf_set_config", 00:22:24.410 "params": { 00:22:24.410 "discovery_filter": "match_any", 00:22:24.410 "admin_cmd_passthru": { 00:22:24.410 "identify_ctrlr": false 00:22:24.410 } 00:22:24.410 } 00:22:24.410 }, 00:22:24.410 { 00:22:24.410 "method": "nvmf_set_max_subsystems", 00:22:24.410 "params": { 00:22:24.410 "max_subsystems": 1024 00:22:24.410 } 00:22:24.410 }, 00:22:24.410 { 00:22:24.410 "method": "nvmf_set_crdt", 00:22:24.410 "params": { 00:22:24.410 "crdt1": 0, 00:22:24.410 "crdt2": 0, 00:22:24.410 "crdt3": 0 00:22:24.410 } 00:22:24.410 }, 00:22:24.410 { 00:22:24.410 "method": "nvmf_create_transport", 00:22:24.410 "params": { 00:22:24.410 "trtype": "TCP", 00:22:24.410 "max_queue_depth": 128, 00:22:24.410 "max_io_qpairs_per_ctrlr": 127, 00:22:24.410 "in_capsule_data_size": 4096, 00:22:24.410 "max_io_size": 131072, 00:22:24.410 "io_unit_size": 131072, 00:22:24.410 "max_aq_depth": 128, 00:22:24.410 "num_shared_buffers": 511, 00:22:24.410 "buf_cache_size": 4294967295, 00:22:24.410 "dif_insert_or_strip": false, 00:22:24.410 "zcopy": false, 00:22:24.410 "c2h_success": false, 00:22:24.410 "sock_priority": 0, 00:22:24.410 "abort_timeout_sec": 1, 00:22:24.410 "ack_timeout": 0, 00:22:24.410 "data_wr_pool_size": 0 00:22:24.410 } 00:22:24.410 }, 00:22:24.410 { 00:22:24.410 "method": "nvmf_create_subsystem", 00:22:24.410 "params": { 00:22:24.410 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.410 "allow_any_host": false, 00:22:24.410 "serial_number": "SPDK00000000000001", 00:22:24.410 "model_number": "SPDK bdev Controller", 00:22:24.410 "max_namespaces": 10, 00:22:24.410 "min_cntlid": 1, 00:22:24.410 "max_cntlid": 65519, 00:22:24.410 "ana_reporting": false 00:22:24.410 } 00:22:24.410 }, 00:22:24.410 { 00:22:24.410 "method": "nvmf_subsystem_add_host", 00:22:24.410 "params": { 00:22:24.410 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.410 "host": "nqn.2016-06.io.spdk:host1", 00:22:24.410 "psk": "/tmp/tmp.Void7Inglv" 00:22:24.410 } 00:22:24.410 }, 00:22:24.410 { 00:22:24.410 "method": "nvmf_subsystem_add_ns", 00:22:24.410 "params": { 00:22:24.410 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.410 "namespace": { 00:22:24.410 "nsid": 1, 00:22:24.410 "bdev_name": "malloc0", 00:22:24.410 "nguid": "73CB0589BE86400D9FE3AE0FFFB80031", 00:22:24.410 "uuid": "73cb0589-be86-400d-9fe3-ae0fffb80031", 00:22:24.410 "no_auto_visible": false 00:22:24.410 } 00:22:24.410 } 00:22:24.410 }, 00:22:24.410 { 00:22:24.410 "method": "nvmf_subsystem_add_listener", 00:22:24.410 "params": { 00:22:24.410 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.410 "listen_address": { 00:22:24.410 "trtype": "TCP", 00:22:24.410 "adrfam": "IPv4", 00:22:24.410 "traddr": "10.0.0.2", 00:22:24.410 "trsvcid": "4420" 00:22:24.410 }, 00:22:24.410 "secure_channel": true 00:22:24.410 } 00:22:24.410 } 00:22:24.410 ] 00:22:24.410 } 00:22:24.410 ] 00:22:24.410 }' 00:22:24.410 09:31:11 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:24.672 09:31:11 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:24.672 "subsystems": [ 00:22:24.672 { 00:22:24.672 "subsystem": "keyring", 00:22:24.672 "config": [] 00:22:24.672 }, 00:22:24.672 { 00:22:24.672 "subsystem": "iobuf", 00:22:24.672 "config": [ 00:22:24.672 { 00:22:24.672 "method": "iobuf_set_options", 00:22:24.672 "params": { 00:22:24.672 "small_pool_count": 8192, 00:22:24.672 "large_pool_count": 1024, 00:22:24.672 "small_bufsize": 8192, 00:22:24.672 "large_bufsize": 135168 00:22:24.672 } 00:22:24.672 } 00:22:24.672 ] 00:22:24.672 }, 00:22:24.672 { 00:22:24.672 "subsystem": "sock", 00:22:24.672 "config": [ 00:22:24.672 { 00:22:24.672 "method": "sock_set_default_impl", 00:22:24.672 "params": { 00:22:24.672 "impl_name": "posix" 00:22:24.672 } 00:22:24.672 }, 00:22:24.672 { 00:22:24.672 "method": "sock_impl_set_options", 00:22:24.672 "params": { 00:22:24.672 "impl_name": "ssl", 00:22:24.672 "recv_buf_size": 4096, 00:22:24.672 "send_buf_size": 4096, 00:22:24.672 "enable_recv_pipe": true, 00:22:24.672 "enable_quickack": false, 00:22:24.672 "enable_placement_id": 0, 00:22:24.672 "enable_zerocopy_send_server": true, 00:22:24.672 "enable_zerocopy_send_client": false, 00:22:24.672 "zerocopy_threshold": 0, 00:22:24.672 "tls_version": 0, 00:22:24.672 "enable_ktls": false 00:22:24.672 } 00:22:24.672 }, 00:22:24.672 { 00:22:24.672 "method": "sock_impl_set_options", 00:22:24.672 "params": { 00:22:24.672 "impl_name": "posix", 00:22:24.672 "recv_buf_size": 2097152, 00:22:24.672 "send_buf_size": 2097152, 00:22:24.672 "enable_recv_pipe": true, 00:22:24.672 "enable_quickack": false, 00:22:24.672 "enable_placement_id": 0, 00:22:24.672 "enable_zerocopy_send_server": true, 00:22:24.672 "enable_zerocopy_send_client": false, 00:22:24.672 "zerocopy_threshold": 0, 00:22:24.672 "tls_version": 0, 00:22:24.672 "enable_ktls": false 00:22:24.672 } 00:22:24.672 } 00:22:24.672 ] 00:22:24.672 }, 00:22:24.672 { 00:22:24.672 "subsystem": "vmd", 00:22:24.672 "config": [] 00:22:24.672 }, 00:22:24.672 { 00:22:24.672 "subsystem": "accel", 00:22:24.672 "config": [ 00:22:24.672 { 00:22:24.672 "method": "accel_set_options", 00:22:24.672 "params": { 00:22:24.672 "small_cache_size": 128, 00:22:24.672 "large_cache_size": 16, 00:22:24.672 "task_count": 2048, 00:22:24.672 "sequence_count": 2048, 00:22:24.672 "buf_count": 2048 00:22:24.672 } 00:22:24.672 } 00:22:24.672 ] 00:22:24.672 }, 00:22:24.672 { 00:22:24.672 "subsystem": "bdev", 00:22:24.672 "config": [ 00:22:24.672 { 00:22:24.672 "method": "bdev_set_options", 00:22:24.672 "params": { 00:22:24.672 "bdev_io_pool_size": 65535, 00:22:24.672 "bdev_io_cache_size": 256, 00:22:24.672 "bdev_auto_examine": true, 00:22:24.672 "iobuf_small_cache_size": 128, 00:22:24.672 "iobuf_large_cache_size": 16 00:22:24.672 } 00:22:24.672 }, 00:22:24.672 { 00:22:24.672 "method": "bdev_raid_set_options", 00:22:24.672 "params": { 00:22:24.672 "process_window_size_kb": 1024 00:22:24.672 } 00:22:24.672 }, 00:22:24.672 { 00:22:24.672 "method": "bdev_iscsi_set_options", 00:22:24.672 "params": { 00:22:24.672 "timeout_sec": 30 00:22:24.672 } 00:22:24.672 }, 00:22:24.672 { 00:22:24.672 "method": "bdev_nvme_set_options", 00:22:24.672 "params": { 00:22:24.672 "action_on_timeout": "none", 00:22:24.672 "timeout_us": 0, 00:22:24.672 "timeout_admin_us": 0, 00:22:24.672 "keep_alive_timeout_ms": 10000, 00:22:24.672 "arbitration_burst": 0, 
00:22:24.672 "low_priority_weight": 0, 00:22:24.672 "medium_priority_weight": 0, 00:22:24.672 "high_priority_weight": 0, 00:22:24.672 "nvme_adminq_poll_period_us": 10000, 00:22:24.672 "nvme_ioq_poll_period_us": 0, 00:22:24.672 "io_queue_requests": 512, 00:22:24.672 "delay_cmd_submit": true, 00:22:24.672 "transport_retry_count": 4, 00:22:24.672 "bdev_retry_count": 3, 00:22:24.672 "transport_ack_timeout": 0, 00:22:24.672 "ctrlr_loss_timeout_sec": 0, 00:22:24.672 "reconnect_delay_sec": 0, 00:22:24.672 "fast_io_fail_timeout_sec": 0, 00:22:24.672 "disable_auto_failback": false, 00:22:24.672 "generate_uuids": false, 00:22:24.672 "transport_tos": 0, 00:22:24.672 "nvme_error_stat": false, 00:22:24.672 "rdma_srq_size": 0, 00:22:24.672 "io_path_stat": false, 00:22:24.672 "allow_accel_sequence": false, 00:22:24.672 "rdma_max_cq_size": 0, 00:22:24.672 "rdma_cm_event_timeout_ms": 0, 00:22:24.672 "dhchap_digests": [ 00:22:24.672 "sha256", 00:22:24.672 "sha384", 00:22:24.672 "sha512" 00:22:24.672 ], 00:22:24.672 "dhchap_dhgroups": [ 00:22:24.672 "null", 00:22:24.672 "ffdhe2048", 00:22:24.672 "ffdhe3072", 00:22:24.672 "ffdhe4096", 00:22:24.672 "ffdhe6144", 00:22:24.672 "ffdhe8192" 00:22:24.672 ] 00:22:24.672 } 00:22:24.672 }, 00:22:24.672 { 00:22:24.672 "method": "bdev_nvme_attach_controller", 00:22:24.672 "params": { 00:22:24.672 "name": "TLSTEST", 00:22:24.672 "trtype": "TCP", 00:22:24.672 "adrfam": "IPv4", 00:22:24.672 "traddr": "10.0.0.2", 00:22:24.672 "trsvcid": "4420", 00:22:24.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.672 "prchk_reftag": false, 00:22:24.672 "prchk_guard": false, 00:22:24.672 "ctrlr_loss_timeout_sec": 0, 00:22:24.672 "reconnect_delay_sec": 0, 00:22:24.672 "fast_io_fail_timeout_sec": 0, 00:22:24.672 "psk": "/tmp/tmp.Void7Inglv", 00:22:24.672 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:24.672 "hdgst": false, 00:22:24.672 "ddgst": false 00:22:24.672 } 00:22:24.672 }, 00:22:24.672 { 00:22:24.673 "method": "bdev_nvme_set_hotplug", 00:22:24.673 "params": { 00:22:24.673 "period_us": 100000, 00:22:24.673 "enable": false 00:22:24.673 } 00:22:24.673 }, 00:22:24.673 { 00:22:24.673 "method": "bdev_wait_for_examine" 00:22:24.673 } 00:22:24.673 ] 00:22:24.673 }, 00:22:24.673 { 00:22:24.673 "subsystem": "nbd", 00:22:24.673 "config": [] 00:22:24.673 } 00:22:24.673 ] 00:22:24.673 }' 00:22:24.673 09:31:11 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 742130 00:22:24.673 09:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 742130 ']' 00:22:24.673 09:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 742130 00:22:24.673 09:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:24.673 09:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:24.673 09:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 742130 00:22:24.673 09:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:24.673 09:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:24.673 09:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 742130' 00:22:24.673 killing process with pid 742130 00:22:24.673 09:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 742130 00:22:24.673 Received shutdown signal, test time was about 10.000000 seconds 00:22:24.673 00:22:24.673 Latency(us) 00:22:24.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:24.673 =================================================================================================================== 00:22:24.673 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:24.673 [2024-07-15 09:31:11.776501] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:24.673 09:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 742130 00:22:24.935 09:31:11 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 741769 00:22:24.935 09:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 741769 ']' 00:22:24.935 09:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 741769 00:22:24.935 09:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:24.935 09:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:24.935 09:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 741769 00:22:24.935 09:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:24.935 09:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:24.935 09:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 741769' 00:22:24.935 killing process with pid 741769 00:22:24.935 09:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 741769 00:22:24.935 [2024-07-15 09:31:11.943966] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:24.935 09:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 741769 00:22:24.935 09:31:12 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:24.935 "subsystems": [ 00:22:24.935 { 00:22:24.935 "subsystem": "keyring", 00:22:24.935 "config": [] 00:22:24.935 }, 00:22:24.935 { 00:22:24.935 "subsystem": "iobuf", 00:22:24.935 "config": [ 00:22:24.935 { 00:22:24.935 "method": "iobuf_set_options", 00:22:24.935 "params": { 00:22:24.935 "small_pool_count": 8192, 00:22:24.935 "large_pool_count": 1024, 00:22:24.935 "small_bufsize": 8192, 00:22:24.935 "large_bufsize": 135168 00:22:24.935 } 00:22:24.935 } 00:22:24.935 ] 00:22:24.935 }, 00:22:24.935 { 00:22:24.935 "subsystem": "sock", 00:22:24.935 "config": [ 00:22:24.935 { 00:22:24.935 "method": "sock_set_default_impl", 00:22:24.935 "params": { 00:22:24.935 "impl_name": "posix" 00:22:24.935 } 00:22:24.935 }, 00:22:24.935 { 00:22:24.935 "method": "sock_impl_set_options", 00:22:24.935 "params": { 00:22:24.935 "impl_name": "ssl", 00:22:24.935 "recv_buf_size": 4096, 00:22:24.935 "send_buf_size": 4096, 00:22:24.935 "enable_recv_pipe": true, 00:22:24.935 "enable_quickack": false, 00:22:24.935 "enable_placement_id": 0, 00:22:24.935 "enable_zerocopy_send_server": true, 00:22:24.935 "enable_zerocopy_send_client": false, 00:22:24.935 "zerocopy_threshold": 0, 00:22:24.935 "tls_version": 0, 00:22:24.935 "enable_ktls": false 00:22:24.935 } 00:22:24.935 }, 00:22:24.935 { 00:22:24.935 "method": "sock_impl_set_options", 00:22:24.935 "params": { 00:22:24.935 "impl_name": "posix", 00:22:24.935 "recv_buf_size": 2097152, 00:22:24.935 "send_buf_size": 2097152, 00:22:24.935 "enable_recv_pipe": true, 00:22:24.935 "enable_quickack": false, 00:22:24.935 "enable_placement_id": 0, 00:22:24.935 "enable_zerocopy_send_server": true, 00:22:24.935 "enable_zerocopy_send_client": false, 00:22:24.935 
"zerocopy_threshold": 0, 00:22:24.935 "tls_version": 0, 00:22:24.935 "enable_ktls": false 00:22:24.935 } 00:22:24.935 } 00:22:24.935 ] 00:22:24.935 }, 00:22:24.935 { 00:22:24.935 "subsystem": "vmd", 00:22:24.935 "config": [] 00:22:24.935 }, 00:22:24.935 { 00:22:24.935 "subsystem": "accel", 00:22:24.935 "config": [ 00:22:24.935 { 00:22:24.935 "method": "accel_set_options", 00:22:24.935 "params": { 00:22:24.935 "small_cache_size": 128, 00:22:24.935 "large_cache_size": 16, 00:22:24.935 "task_count": 2048, 00:22:24.935 "sequence_count": 2048, 00:22:24.935 "buf_count": 2048 00:22:24.935 } 00:22:24.935 } 00:22:24.935 ] 00:22:24.935 }, 00:22:24.935 { 00:22:24.935 "subsystem": "bdev", 00:22:24.935 "config": [ 00:22:24.935 { 00:22:24.935 "method": "bdev_set_options", 00:22:24.935 "params": { 00:22:24.935 "bdev_io_pool_size": 65535, 00:22:24.935 "bdev_io_cache_size": 256, 00:22:24.935 "bdev_auto_examine": true, 00:22:24.935 "iobuf_small_cache_size": 128, 00:22:24.935 "iobuf_large_cache_size": 16 00:22:24.936 } 00:22:24.936 }, 00:22:24.936 { 00:22:24.936 "method": "bdev_raid_set_options", 00:22:24.936 "params": { 00:22:24.936 "process_window_size_kb": 1024 00:22:24.936 } 00:22:24.936 }, 00:22:24.936 { 00:22:24.936 "method": "bdev_iscsi_set_options", 00:22:24.936 "params": { 00:22:24.936 "timeout_sec": 30 00:22:24.936 } 00:22:24.936 }, 00:22:24.936 { 00:22:24.936 "method": "bdev_nvme_set_options", 00:22:24.936 "params": { 00:22:24.936 "action_on_timeout": "none", 00:22:24.936 "timeout_us": 0, 00:22:24.936 "timeout_admin_us": 0, 00:22:24.936 "keep_alive_timeout_ms": 10000, 00:22:24.936 "arbitration_burst": 0, 00:22:24.936 "low_priority_weight": 0, 00:22:24.936 "medium_priority_weight": 0, 00:22:24.936 "high_priority_weight": 0, 00:22:24.936 "nvme_adminq_poll_period_us": 10000, 00:22:24.936 "nvme_ioq_poll_period_us": 0, 00:22:24.936 "io_queue_requests": 0, 00:22:24.936 "delay_cmd_submit": true, 00:22:24.936 "transport_retry_count": 4, 00:22:24.936 "bdev_retry_count": 3, 00:22:24.936 "transport_ack_timeout": 0, 00:22:24.936 "ctrlr_loss_timeout_sec": 0, 00:22:24.936 "reconnect_delay_sec": 0, 00:22:24.936 "fast_io_fail_timeout_sec": 0, 00:22:24.936 "disable_auto_failback": false, 00:22:24.936 "generate_uuids": false, 00:22:24.936 "transport_tos": 0, 00:22:24.936 "nvme_error_stat": false, 00:22:24.936 "rdma_srq_size": 0, 00:22:24.936 "io_path_stat": false, 00:22:24.936 "allow_accel_sequence": false, 00:22:24.936 "rdma_max_cq_size": 0, 00:22:24.936 "rdma_cm_event_timeout_ms": 0, 00:22:24.936 "dhchap_digests": [ 00:22:24.936 "sha256", 00:22:24.936 "sha384", 00:22:24.936 "sha512" 00:22:24.936 ], 00:22:24.936 "dhchap_dhgroups": [ 00:22:24.936 "null", 00:22:24.936 "ffdhe2048", 00:22:24.936 "ffdhe3072", 00:22:24.936 "ffdhe4096", 00:22:24.936 "ffdhe6144", 00:22:24.936 "ffdhe8192" 00:22:24.936 ] 00:22:24.936 } 00:22:24.936 }, 00:22:24.936 { 00:22:24.936 "method": "bdev_nvme_set_hotplug", 00:22:24.936 "params": { 00:22:24.936 "period_us": 100000, 00:22:24.936 "enable": false 00:22:24.936 } 00:22:24.936 }, 00:22:24.936 { 00:22:24.936 "method": "bdev_malloc_create", 00:22:24.936 "params": { 00:22:24.936 "name": "malloc0", 00:22:24.936 "num_blocks": 8192, 00:22:24.936 "block_size": 4096, 00:22:24.936 "physical_block_size": 4096, 00:22:24.936 "uuid": "73cb0589-be86-400d-9fe3-ae0fffb80031", 00:22:24.936 "optimal_io_boundary": 0 00:22:24.936 } 00:22:24.936 }, 00:22:24.936 { 00:22:24.936 "method": "bdev_wait_for_examine" 00:22:24.936 } 00:22:24.936 ] 00:22:24.936 }, 00:22:24.936 { 00:22:24.936 "subsystem": "nbd", 
00:22:24.936 "config": [] 00:22:24.936 }, 00:22:24.936 { 00:22:24.936 "subsystem": "scheduler", 00:22:24.936 "config": [ 00:22:24.936 { 00:22:24.936 "method": "framework_set_scheduler", 00:22:24.936 "params": { 00:22:24.936 "name": "static" 00:22:24.936 } 00:22:24.936 } 00:22:24.936 ] 00:22:24.936 }, 00:22:24.936 { 00:22:24.936 "subsystem": "nvmf", 00:22:24.936 "config": [ 00:22:24.936 { 00:22:24.936 "method": "nvmf_set_config", 00:22:24.936 "params": { 00:22:24.936 "discovery_filter": "match_any", 00:22:24.936 "admin_cmd_passthru": { 00:22:24.936 "identify_ctrlr": false 00:22:24.936 } 00:22:24.936 } 00:22:24.936 }, 00:22:24.936 { 00:22:24.936 "method": "nvmf_set_max_subsystems", 00:22:24.936 "params": { 00:22:24.936 "max_subsystems": 1024 00:22:24.936 } 00:22:24.936 }, 00:22:24.936 { 00:22:24.936 "method": "nvmf_set_crdt", 00:22:24.936 "params": { 00:22:24.936 "crdt1": 0, 00:22:24.936 "crdt2": 0, 00:22:24.936 "crdt3": 0 00:22:24.936 } 00:22:24.936 }, 00:22:24.936 { 00:22:24.936 "method": "nvmf_create_transport", 00:22:24.936 "params": { 00:22:24.936 "trtype": "TCP", 00:22:24.936 "max_queue_depth": 128, 00:22:24.936 "max_io_qpairs_per_ctrlr": 127, 00:22:24.936 "in_capsule_data_size": 4096, 00:22:24.936 "max_io_size": 131072, 00:22:24.936 "io_unit_size": 131072, 00:22:24.936 "max_aq_depth": 128, 00:22:24.936 "num_shared_buffers": 511, 00:22:24.936 "buf_cache_size": 4294967295, 00:22:24.936 "dif_insert_or_strip": false, 00:22:24.936 "zcopy": false, 00:22:24.936 "c2h_success": false, 00:22:24.936 "sock_priority": 0, 00:22:24.936 "abort_timeout_sec": 1, 00:22:24.936 "ack_timeout": 0, 00:22:24.936 "data_wr_pool_size": 0 00:22:24.936 } 00:22:24.936 }, 00:22:24.936 { 00:22:24.936 "method": "nvmf_create_subsystem", 00:22:24.936 "params": { 00:22:24.936 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.936 "allow_any_host": false, 00:22:24.936 "serial_number": "SPDK00000000000001", 00:22:24.936 "model_number": "SPDK bdev Controller", 00:22:24.936 "max_namespaces": 10, 00:22:24.936 "min_cntlid": 1, 00:22:24.936 "max_cntlid": 65519, 00:22:24.936 "ana_reporting": false 00:22:24.936 } 00:22:24.936 }, 00:22:24.936 { 00:22:24.936 "method": "nvmf_subsystem_add_host", 00:22:24.936 "params": { 00:22:24.936 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.936 "host": "nqn.2016-06.io.spdk:host1", 00:22:24.936 "psk": "/tmp/tmp.Void7Inglv" 00:22:24.936 } 00:22:24.936 }, 00:22:24.936 { 00:22:24.936 "method": "nvmf_subsystem_add_ns", 00:22:24.936 "params": { 00:22:24.936 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.936 "namespace": { 00:22:24.936 "nsid": 1, 00:22:24.936 "bdev_name": "malloc0", 00:22:24.936 "nguid": "73CB0589BE86400D9FE3AE0FFFB80031", 00:22:24.936 "uuid": "73cb0589-be86-400d-9fe3-ae0fffb80031", 00:22:24.936 "no_auto_visible": false 00:22:24.936 } 00:22:24.936 } 00:22:24.936 }, 00:22:24.936 { 00:22:24.936 "method": "nvmf_subsystem_add_listener", 00:22:24.936 "params": { 00:22:24.936 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.936 "listen_address": { 00:22:24.936 "trtype": "TCP", 00:22:24.936 "adrfam": "IPv4", 00:22:24.936 "traddr": "10.0.0.2", 00:22:24.936 "trsvcid": "4420" 00:22:24.936 }, 00:22:24.936 "secure_channel": true 00:22:24.936 } 00:22:24.936 } 00:22:24.936 ] 00:22:24.936 } 00:22:24.936 ] 00:22:24.936 }' 00:22:24.936 09:31:12 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:24.936 09:31:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:24.936 09:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 
00:22:24.936 09:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:24.936 09:31:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=742438 00:22:24.936 09:31:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 742438 00:22:24.936 09:31:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:24.936 09:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 742438 ']' 00:22:24.936 09:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.936 09:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:24.936 09:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.936 09:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:24.936 09:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.198 [2024-07-15 09:31:12.135413] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:22:25.198 [2024-07-15 09:31:12.135477] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.198 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.199 [2024-07-15 09:31:12.221879] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.199 [2024-07-15 09:31:12.276206] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.199 [2024-07-15 09:31:12.276238] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:25.199 [2024-07-15 09:31:12.276244] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.199 [2024-07-15 09:31:12.276248] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.199 [2024-07-15 09:31:12.276252] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:25.199 [2024-07-15 09:31:12.276295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.460 [2024-07-15 09:31:12.458930] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.460 [2024-07-15 09:31:12.474909] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:25.460 [2024-07-15 09:31:12.490954] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:25.460 [2024-07-15 09:31:12.501024] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.722 09:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:25.723 09:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:25.723 09:31:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:25.723 09:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:25.723 09:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.723 09:31:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.723 09:31:12 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=742508 00:22:25.723 09:31:12 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 742508 /var/tmp/bdevperf.sock 00:22:25.723 09:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 742508 ']' 00:22:25.723 09:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:25.723 09:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:25.723 09:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:25.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:25.723 09:31:12 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:25.723 09:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:25.723 09:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.723 09:31:12 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:25.723 "subsystems": [ 00:22:25.723 { 00:22:25.723 "subsystem": "keyring", 00:22:25.723 "config": [] 00:22:25.723 }, 00:22:25.723 { 00:22:25.723 "subsystem": "iobuf", 00:22:25.723 "config": [ 00:22:25.723 { 00:22:25.723 "method": "iobuf_set_options", 00:22:25.723 "params": { 00:22:25.723 "small_pool_count": 8192, 00:22:25.723 "large_pool_count": 1024, 00:22:25.723 "small_bufsize": 8192, 00:22:25.723 "large_bufsize": 135168 00:22:25.723 } 00:22:25.723 } 00:22:25.723 ] 00:22:25.723 }, 00:22:25.723 { 00:22:25.723 "subsystem": "sock", 00:22:25.723 "config": [ 00:22:25.723 { 00:22:25.723 "method": "sock_set_default_impl", 00:22:25.723 "params": { 00:22:25.723 "impl_name": "posix" 00:22:25.723 } 00:22:25.723 }, 00:22:25.723 { 00:22:25.723 "method": "sock_impl_set_options", 00:22:25.723 "params": { 00:22:25.723 "impl_name": "ssl", 00:22:25.723 "recv_buf_size": 4096, 00:22:25.723 "send_buf_size": 4096, 00:22:25.723 "enable_recv_pipe": true, 00:22:25.723 "enable_quickack": false, 00:22:25.723 "enable_placement_id": 0, 00:22:25.723 "enable_zerocopy_send_server": true, 00:22:25.723 "enable_zerocopy_send_client": false, 00:22:25.723 "zerocopy_threshold": 0, 00:22:25.723 "tls_version": 0, 00:22:25.723 "enable_ktls": false 00:22:25.723 } 00:22:25.723 }, 00:22:25.723 { 00:22:25.723 "method": "sock_impl_set_options", 00:22:25.723 "params": { 00:22:25.723 "impl_name": "posix", 00:22:25.723 "recv_buf_size": 2097152, 00:22:25.723 "send_buf_size": 2097152, 00:22:25.723 "enable_recv_pipe": true, 00:22:25.723 "enable_quickack": false, 00:22:25.723 "enable_placement_id": 0, 00:22:25.723 "enable_zerocopy_send_server": true, 00:22:25.723 "enable_zerocopy_send_client": false, 00:22:25.723 "zerocopy_threshold": 0, 00:22:25.723 "tls_version": 0, 00:22:25.723 "enable_ktls": false 00:22:25.723 } 00:22:25.723 } 00:22:25.723 ] 00:22:25.723 }, 00:22:25.723 { 00:22:25.723 "subsystem": "vmd", 00:22:25.723 "config": [] 00:22:25.723 }, 00:22:25.723 { 00:22:25.723 "subsystem": "accel", 00:22:25.723 "config": [ 00:22:25.723 { 00:22:25.723 "method": "accel_set_options", 00:22:25.723 "params": { 00:22:25.723 "small_cache_size": 128, 00:22:25.723 "large_cache_size": 16, 00:22:25.723 "task_count": 2048, 00:22:25.723 "sequence_count": 2048, 00:22:25.723 "buf_count": 2048 00:22:25.723 } 00:22:25.723 } 00:22:25.723 ] 00:22:25.723 }, 00:22:25.723 { 00:22:25.723 "subsystem": "bdev", 00:22:25.723 "config": [ 00:22:25.723 { 00:22:25.723 "method": "bdev_set_options", 00:22:25.723 "params": { 00:22:25.723 "bdev_io_pool_size": 65535, 00:22:25.723 "bdev_io_cache_size": 256, 00:22:25.723 "bdev_auto_examine": true, 00:22:25.723 "iobuf_small_cache_size": 128, 00:22:25.723 "iobuf_large_cache_size": 16 00:22:25.723 } 00:22:25.723 }, 00:22:25.723 { 00:22:25.723 "method": "bdev_raid_set_options", 00:22:25.723 "params": { 00:22:25.723 "process_window_size_kb": 1024 00:22:25.723 } 00:22:25.723 }, 00:22:25.723 { 00:22:25.723 "method": "bdev_iscsi_set_options", 00:22:25.723 "params": { 00:22:25.723 "timeout_sec": 30 00:22:25.723 } 00:22:25.723 }, 00:22:25.723 { 00:22:25.723 "method": 
"bdev_nvme_set_options", 00:22:25.723 "params": { 00:22:25.723 "action_on_timeout": "none", 00:22:25.723 "timeout_us": 0, 00:22:25.723 "timeout_admin_us": 0, 00:22:25.723 "keep_alive_timeout_ms": 10000, 00:22:25.723 "arbitration_burst": 0, 00:22:25.723 "low_priority_weight": 0, 00:22:25.723 "medium_priority_weight": 0, 00:22:25.723 "high_priority_weight": 0, 00:22:25.723 "nvme_adminq_poll_period_us": 10000, 00:22:25.723 "nvme_ioq_poll_period_us": 0, 00:22:25.723 "io_queue_requests": 512, 00:22:25.723 "delay_cmd_submit": true, 00:22:25.723 "transport_retry_count": 4, 00:22:25.723 "bdev_retry_count": 3, 00:22:25.723 "transport_ack_timeout": 0, 00:22:25.723 "ctrlr_loss_timeout_sec": 0, 00:22:25.723 "reconnect_delay_sec": 0, 00:22:25.723 "fast_io_fail_timeout_sec": 0, 00:22:25.723 "disable_auto_failback": false, 00:22:25.723 "generate_uuids": false, 00:22:25.723 "transport_tos": 0, 00:22:25.723 "nvme_error_stat": false, 00:22:25.723 "rdma_srq_size": 0, 00:22:25.723 "io_path_stat": false, 00:22:25.723 "allow_accel_sequence": false, 00:22:25.723 "rdma_max_cq_size": 0, 00:22:25.723 "rdma_cm_event_timeout_ms": 0, 00:22:25.723 "dhchap_digests": [ 00:22:25.723 "sha256", 00:22:25.723 "sha384", 00:22:25.723 "sha512" 00:22:25.723 ], 00:22:25.723 "dhchap_dhgroups": [ 00:22:25.723 "null", 00:22:25.723 "ffdhe2048", 00:22:25.723 "ffdhe3072", 00:22:25.723 "ffdhe4096", 00:22:25.723 "ffdhe6144", 00:22:25.723 "ffdhe8192" 00:22:25.723 ] 00:22:25.723 } 00:22:25.723 }, 00:22:25.723 { 00:22:25.723 "method": "bdev_nvme_attach_controller", 00:22:25.723 "params": { 00:22:25.723 "name": "TLSTEST", 00:22:25.723 "trtype": "TCP", 00:22:25.723 "adrfam": "IPv4", 00:22:25.723 "traddr": "10.0.0.2", 00:22:25.723 "trsvcid": "4420", 00:22:25.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:25.723 "prchk_reftag": false, 00:22:25.723 "prchk_guard": false, 00:22:25.723 "ctrlr_loss_timeout_sec": 0, 00:22:25.723 "reconnect_delay_sec": 0, 00:22:25.723 "fast_io_fail_timeout_sec": 0, 00:22:25.723 "psk": "/tmp/tmp.Void7Inglv", 00:22:25.723 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:25.723 "hdgst": false, 00:22:25.723 "ddgst": false 00:22:25.723 } 00:22:25.723 }, 00:22:25.723 { 00:22:25.723 "method": "bdev_nvme_set_hotplug", 00:22:25.723 "params": { 00:22:25.723 "period_us": 100000, 00:22:25.723 "enable": false 00:22:25.723 } 00:22:25.723 }, 00:22:25.723 { 00:22:25.723 "method": "bdev_wait_for_examine" 00:22:25.723 } 00:22:25.723 ] 00:22:25.723 }, 00:22:25.723 { 00:22:25.723 "subsystem": "nbd", 00:22:25.723 "config": [] 00:22:25.723 } 00:22:25.723 ] 00:22:25.723 }' 00:22:25.984 [2024-07-15 09:31:12.944216] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:22:25.984 [2024-07-15 09:31:12.944267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid742508 ] 00:22:25.984 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.984 [2024-07-15 09:31:12.999158] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.984 [2024-07-15 09:31:13.051693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.984 [2024-07-15 09:31:13.175642] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:25.984 [2024-07-15 09:31:13.175708] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:26.551 09:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:26.551 09:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:26.551 09:31:13 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:26.810 Running I/O for 10 seconds... 00:22:36.799 00:22:36.799 Latency(us) 00:22:36.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.799 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:36.799 Verification LBA range: start 0x0 length 0x2000 00:22:36.799 TLSTESTn1 : 10.02 5993.71 23.41 0.00 0.00 21317.82 4423.68 45438.29 00:22:36.799 =================================================================================================================== 00:22:36.799 Total : 5993.71 23.41 0.00 0.00 21317.82 4423.68 45438.29 00:22:36.799 0 00:22:36.799 09:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:36.799 09:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 742508 00:22:36.799 09:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 742508 ']' 00:22:36.799 09:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 742508 00:22:36.799 09:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:36.799 09:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:36.799 09:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 742508 00:22:36.799 09:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:36.799 09:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:36.799 09:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 742508' 00:22:36.799 killing process with pid 742508 00:22:36.799 09:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 742508 00:22:36.799 Received shutdown signal, test time was about 10.000000 seconds 00:22:36.799 00:22:36.799 Latency(us) 00:22:36.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.799 =================================================================================================================== 00:22:36.799 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:36.799 [2024-07-15 09:31:23.907023] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' 
scheduled for removal in v24.09 hit 1 times 00:22:36.799 09:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 742508 00:22:37.060 09:31:24 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 742438 00:22:37.060 09:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 742438 ']' 00:22:37.060 09:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 742438 00:22:37.060 09:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:37.060 09:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:37.060 09:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 742438 00:22:37.060 09:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:37.061 09:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:37.061 09:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 742438' 00:22:37.061 killing process with pid 742438 00:22:37.061 09:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 742438 00:22:37.061 [2024-07-15 09:31:24.076822] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:37.061 09:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 742438 00:22:37.061 09:31:24 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:37.061 09:31:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:37.061 09:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:37.061 09:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.061 09:31:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=744811 00:22:37.061 09:31:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 744811 00:22:37.061 09:31:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:37.061 09:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 744811 ']' 00:22:37.061 09:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.061 09:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:37.061 09:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.061 09:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:37.061 09:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.322 [2024-07-15 09:31:24.263201] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:22:37.322 [2024-07-15 09:31:24.263265] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:37.322 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.322 [2024-07-15 09:31:24.335292] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.322 [2024-07-15 09:31:24.400767] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:37.322 [2024-07-15 09:31:24.400805] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:37.322 [2024-07-15 09:31:24.400812] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:37.322 [2024-07-15 09:31:24.400819] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:37.322 [2024-07-15 09:31:24.400824] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:37.322 [2024-07-15 09:31:24.400846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.892 09:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:37.892 09:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:37.892 09:31:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:37.892 09:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:37.892 09:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.892 09:31:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:37.892 09:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.Void7Inglv 00:22:37.892 09:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Void7Inglv 00:22:37.892 09:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:38.152 [2024-07-15 09:31:25.203345] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.152 09:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:38.413 09:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:38.413 [2024-07-15 09:31:25.540182] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:38.413 [2024-07-15 09:31:25.540355] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.413 09:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:38.673 malloc0 00:22:38.673 09:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:38.934 09:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.Void7Inglv 00:22:38.934 [2024-07-15 09:31:26.024200] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:38.934 09:31:26 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:38.934 09:31:26 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=745209 00:22:38.934 09:31:26 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:38.934 09:31:26 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 745209 /var/tmp/bdevperf.sock 00:22:38.934 09:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 745209 ']' 00:22:38.934 09:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:38.934 09:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.934 09:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:38.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:38.934 09:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.934 09:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.934 [2024-07-15 09:31:26.094242] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:22:38.934 [2024-07-15 09:31:26.094294] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid745209 ] 00:22:38.934 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.193 [2024-07-15 09:31:26.175959] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.193 [2024-07-15 09:31:26.229622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.761 09:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:39.761 09:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:39.761 09:31:26 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Void7Inglv 00:22:40.021 09:31:27 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:40.021 [2024-07-15 09:31:27.159179] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:40.281 nvme0n1 00:22:40.281 09:31:27 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:40.281 Running I/O for 1 seconds... 
00:22:41.222 00:22:41.222 Latency(us) 00:22:41.222 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.222 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:41.222 Verification LBA range: start 0x0 length 0x2000 00:22:41.222 nvme0n1 : 1.04 4059.04 15.86 0.00 0.00 31232.82 4833.28 54613.33 00:22:41.222 =================================================================================================================== 00:22:41.222 Total : 4059.04 15.86 0.00 0.00 31232.82 4833.28 54613.33 00:22:41.222 0 00:22:41.222 09:31:28 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 745209 00:22:41.222 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 745209 ']' 00:22:41.222 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 745209 00:22:41.222 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:41.222 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:41.222 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 745209 00:22:41.483 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:41.483 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:41.483 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 745209' 00:22:41.483 killing process with pid 745209 00:22:41.483 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 745209 00:22:41.483 Received shutdown signal, test time was about 1.000000 seconds 00:22:41.483 00:22:41.483 Latency(us) 00:22:41.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.483 =================================================================================================================== 00:22:41.483 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:41.483 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 745209 00:22:41.483 09:31:28 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 744811 00:22:41.483 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 744811 ']' 00:22:41.483 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 744811 00:22:41.483 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:41.483 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:41.483 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 744811 00:22:41.483 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:41.483 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:41.483 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 744811' 00:22:41.483 killing process with pid 744811 00:22:41.483 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 744811 00:22:41.483 [2024-07-15 09:31:28.613878] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:41.483 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 744811 00:22:41.744 09:31:28 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:22:41.744 09:31:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:41.744 09:31:28 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:41.744 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.744 09:31:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=745590 00:22:41.744 09:31:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 745590 00:22:41.744 09:31:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:41.744 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 745590 ']' 00:22:41.744 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.744 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:41.744 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.744 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:41.744 09:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.744 [2024-07-15 09:31:28.815646] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:22:41.744 [2024-07-15 09:31:28.815703] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.744 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.744 [2024-07-15 09:31:28.888674] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.004 [2024-07-15 09:31:28.953928] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:42.004 [2024-07-15 09:31:28.953967] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:42.004 [2024-07-15 09:31:28.953975] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.004 [2024-07-15 09:31:28.953981] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.004 [2024-07-15 09:31:28.953987] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:42.004 [2024-07-15 09:31:28.954006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.575 09:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:42.575 09:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:42.575 09:31:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:42.575 09:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:42.575 09:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:42.575 09:31:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.575 09:31:29 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:22:42.575 09:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.575 09:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:42.575 [2024-07-15 09:31:29.632224] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.575 malloc0 00:22:42.575 [2024-07-15 09:31:29.658962] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:42.575 [2024-07-15 09:31:29.659137] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:42.575 09:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.575 09:31:29 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=745914 00:22:42.575 09:31:29 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 745914 /var/tmp/bdevperf.sock 00:22:42.575 09:31:29 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:42.575 09:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 745914 ']' 00:22:42.575 09:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:42.575 09:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:42.575 09:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:42.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:42.575 09:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:42.575 09:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:42.575 [2024-07-15 09:31:29.734508] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:22:42.575 [2024-07-15 09:31:29.734552] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid745914 ] 00:22:42.575 EAL: No free 2048 kB hugepages reported on node 1 00:22:42.835 [2024-07-15 09:31:29.815695] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.835 [2024-07-15 09:31:29.869686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.404 09:31:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:43.404 09:31:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:43.404 09:31:30 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Void7Inglv 00:22:43.664 09:31:30 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:43.664 [2024-07-15 09:31:30.786967] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:43.664 nvme0n1 00:22:43.925 09:31:30 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:43.925 Running I/O for 1 seconds... 00:22:44.865 00:22:44.865 Latency(us) 00:22:44.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.865 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:44.865 Verification LBA range: start 0x0 length 0x2000 00:22:44.865 nvme0n1 : 1.02 4017.48 15.69 0.00 0.00 31607.82 5515.95 99177.81 00:22:44.865 =================================================================================================================== 00:22:44.865 Total : 4017.48 15.69 0.00 0.00 31607.82 5515.95 99177.81 00:22:44.865 0 00:22:44.865 09:31:31 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:22:44.865 09:31:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.865 09:31:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.128 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.128 09:31:32 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:22:45.128 "subsystems": [ 00:22:45.128 { 00:22:45.128 "subsystem": "keyring", 00:22:45.128 "config": [ 00:22:45.128 { 00:22:45.128 "method": "keyring_file_add_key", 00:22:45.128 "params": { 00:22:45.128 "name": "key0", 00:22:45.128 "path": "/tmp/tmp.Void7Inglv" 00:22:45.128 } 00:22:45.128 } 00:22:45.128 ] 00:22:45.128 }, 00:22:45.128 { 00:22:45.128 "subsystem": "iobuf", 00:22:45.128 "config": [ 00:22:45.128 { 00:22:45.128 "method": "iobuf_set_options", 00:22:45.128 "params": { 00:22:45.128 "small_pool_count": 8192, 00:22:45.128 "large_pool_count": 1024, 00:22:45.128 "small_bufsize": 8192, 00:22:45.128 "large_bufsize": 135168 00:22:45.128 } 00:22:45.128 } 00:22:45.128 ] 00:22:45.128 }, 00:22:45.128 { 00:22:45.128 "subsystem": "sock", 00:22:45.128 "config": [ 00:22:45.128 { 00:22:45.128 "method": "sock_set_default_impl", 00:22:45.128 "params": { 00:22:45.128 "impl_name": "posix" 00:22:45.128 } 
00:22:45.128 }, 00:22:45.128 { 00:22:45.128 "method": "sock_impl_set_options", 00:22:45.128 "params": { 00:22:45.129 "impl_name": "ssl", 00:22:45.129 "recv_buf_size": 4096, 00:22:45.129 "send_buf_size": 4096, 00:22:45.129 "enable_recv_pipe": true, 00:22:45.129 "enable_quickack": false, 00:22:45.129 "enable_placement_id": 0, 00:22:45.129 "enable_zerocopy_send_server": true, 00:22:45.129 "enable_zerocopy_send_client": false, 00:22:45.129 "zerocopy_threshold": 0, 00:22:45.129 "tls_version": 0, 00:22:45.129 "enable_ktls": false 00:22:45.129 } 00:22:45.129 }, 00:22:45.129 { 00:22:45.129 "method": "sock_impl_set_options", 00:22:45.129 "params": { 00:22:45.129 "impl_name": "posix", 00:22:45.129 "recv_buf_size": 2097152, 00:22:45.129 "send_buf_size": 2097152, 00:22:45.129 "enable_recv_pipe": true, 00:22:45.129 "enable_quickack": false, 00:22:45.129 "enable_placement_id": 0, 00:22:45.129 "enable_zerocopy_send_server": true, 00:22:45.129 "enable_zerocopy_send_client": false, 00:22:45.129 "zerocopy_threshold": 0, 00:22:45.129 "tls_version": 0, 00:22:45.129 "enable_ktls": false 00:22:45.129 } 00:22:45.129 } 00:22:45.129 ] 00:22:45.129 }, 00:22:45.129 { 00:22:45.129 "subsystem": "vmd", 00:22:45.129 "config": [] 00:22:45.129 }, 00:22:45.129 { 00:22:45.129 "subsystem": "accel", 00:22:45.129 "config": [ 00:22:45.129 { 00:22:45.129 "method": "accel_set_options", 00:22:45.129 "params": { 00:22:45.129 "small_cache_size": 128, 00:22:45.129 "large_cache_size": 16, 00:22:45.129 "task_count": 2048, 00:22:45.129 "sequence_count": 2048, 00:22:45.129 "buf_count": 2048 00:22:45.129 } 00:22:45.129 } 00:22:45.129 ] 00:22:45.129 }, 00:22:45.129 { 00:22:45.129 "subsystem": "bdev", 00:22:45.129 "config": [ 00:22:45.129 { 00:22:45.129 "method": "bdev_set_options", 00:22:45.129 "params": { 00:22:45.129 "bdev_io_pool_size": 65535, 00:22:45.129 "bdev_io_cache_size": 256, 00:22:45.129 "bdev_auto_examine": true, 00:22:45.129 "iobuf_small_cache_size": 128, 00:22:45.129 "iobuf_large_cache_size": 16 00:22:45.129 } 00:22:45.129 }, 00:22:45.129 { 00:22:45.129 "method": "bdev_raid_set_options", 00:22:45.129 "params": { 00:22:45.129 "process_window_size_kb": 1024 00:22:45.129 } 00:22:45.129 }, 00:22:45.129 { 00:22:45.129 "method": "bdev_iscsi_set_options", 00:22:45.129 "params": { 00:22:45.129 "timeout_sec": 30 00:22:45.129 } 00:22:45.129 }, 00:22:45.129 { 00:22:45.129 "method": "bdev_nvme_set_options", 00:22:45.129 "params": { 00:22:45.129 "action_on_timeout": "none", 00:22:45.129 "timeout_us": 0, 00:22:45.129 "timeout_admin_us": 0, 00:22:45.129 "keep_alive_timeout_ms": 10000, 00:22:45.129 "arbitration_burst": 0, 00:22:45.129 "low_priority_weight": 0, 00:22:45.129 "medium_priority_weight": 0, 00:22:45.129 "high_priority_weight": 0, 00:22:45.129 "nvme_adminq_poll_period_us": 10000, 00:22:45.129 "nvme_ioq_poll_period_us": 0, 00:22:45.129 "io_queue_requests": 0, 00:22:45.129 "delay_cmd_submit": true, 00:22:45.129 "transport_retry_count": 4, 00:22:45.129 "bdev_retry_count": 3, 00:22:45.129 "transport_ack_timeout": 0, 00:22:45.129 "ctrlr_loss_timeout_sec": 0, 00:22:45.129 "reconnect_delay_sec": 0, 00:22:45.129 "fast_io_fail_timeout_sec": 0, 00:22:45.129 "disable_auto_failback": false, 00:22:45.129 "generate_uuids": false, 00:22:45.129 "transport_tos": 0, 00:22:45.129 "nvme_error_stat": false, 00:22:45.129 "rdma_srq_size": 0, 00:22:45.129 "io_path_stat": false, 00:22:45.129 "allow_accel_sequence": false, 00:22:45.129 "rdma_max_cq_size": 0, 00:22:45.129 "rdma_cm_event_timeout_ms": 0, 00:22:45.129 "dhchap_digests": [ 00:22:45.129 "sha256", 
00:22:45.129 "sha384", 00:22:45.129 "sha512" 00:22:45.129 ], 00:22:45.129 "dhchap_dhgroups": [ 00:22:45.129 "null", 00:22:45.129 "ffdhe2048", 00:22:45.129 "ffdhe3072", 00:22:45.129 "ffdhe4096", 00:22:45.129 "ffdhe6144", 00:22:45.129 "ffdhe8192" 00:22:45.129 ] 00:22:45.129 } 00:22:45.129 }, 00:22:45.129 { 00:22:45.129 "method": "bdev_nvme_set_hotplug", 00:22:45.129 "params": { 00:22:45.129 "period_us": 100000, 00:22:45.129 "enable": false 00:22:45.129 } 00:22:45.129 }, 00:22:45.129 { 00:22:45.129 "method": "bdev_malloc_create", 00:22:45.129 "params": { 00:22:45.129 "name": "malloc0", 00:22:45.129 "num_blocks": 8192, 00:22:45.129 "block_size": 4096, 00:22:45.129 "physical_block_size": 4096, 00:22:45.129 "uuid": "1935377f-ba2b-4c7a-abf2-bf603e8ef216", 00:22:45.129 "optimal_io_boundary": 0 00:22:45.129 } 00:22:45.129 }, 00:22:45.129 { 00:22:45.129 "method": "bdev_wait_for_examine" 00:22:45.129 } 00:22:45.129 ] 00:22:45.129 }, 00:22:45.129 { 00:22:45.129 "subsystem": "nbd", 00:22:45.129 "config": [] 00:22:45.129 }, 00:22:45.129 { 00:22:45.129 "subsystem": "scheduler", 00:22:45.129 "config": [ 00:22:45.129 { 00:22:45.129 "method": "framework_set_scheduler", 00:22:45.129 "params": { 00:22:45.129 "name": "static" 00:22:45.129 } 00:22:45.129 } 00:22:45.129 ] 00:22:45.129 }, 00:22:45.129 { 00:22:45.129 "subsystem": "nvmf", 00:22:45.129 "config": [ 00:22:45.129 { 00:22:45.129 "method": "nvmf_set_config", 00:22:45.129 "params": { 00:22:45.129 "discovery_filter": "match_any", 00:22:45.129 "admin_cmd_passthru": { 00:22:45.129 "identify_ctrlr": false 00:22:45.129 } 00:22:45.129 } 00:22:45.129 }, 00:22:45.129 { 00:22:45.129 "method": "nvmf_set_max_subsystems", 00:22:45.129 "params": { 00:22:45.129 "max_subsystems": 1024 00:22:45.129 } 00:22:45.129 }, 00:22:45.129 { 00:22:45.129 "method": "nvmf_set_crdt", 00:22:45.129 "params": { 00:22:45.129 "crdt1": 0, 00:22:45.129 "crdt2": 0, 00:22:45.129 "crdt3": 0 00:22:45.129 } 00:22:45.129 }, 00:22:45.129 { 00:22:45.129 "method": "nvmf_create_transport", 00:22:45.129 "params": { 00:22:45.129 "trtype": "TCP", 00:22:45.129 "max_queue_depth": 128, 00:22:45.129 "max_io_qpairs_per_ctrlr": 127, 00:22:45.129 "in_capsule_data_size": 4096, 00:22:45.129 "max_io_size": 131072, 00:22:45.129 "io_unit_size": 131072, 00:22:45.129 "max_aq_depth": 128, 00:22:45.129 "num_shared_buffers": 511, 00:22:45.129 "buf_cache_size": 4294967295, 00:22:45.129 "dif_insert_or_strip": false, 00:22:45.129 "zcopy": false, 00:22:45.129 "c2h_success": false, 00:22:45.129 "sock_priority": 0, 00:22:45.129 "abort_timeout_sec": 1, 00:22:45.129 "ack_timeout": 0, 00:22:45.129 "data_wr_pool_size": 0 00:22:45.129 } 00:22:45.129 }, 00:22:45.129 { 00:22:45.129 "method": "nvmf_create_subsystem", 00:22:45.129 "params": { 00:22:45.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.129 "allow_any_host": false, 00:22:45.129 "serial_number": "00000000000000000000", 00:22:45.129 "model_number": "SPDK bdev Controller", 00:22:45.129 "max_namespaces": 32, 00:22:45.129 "min_cntlid": 1, 00:22:45.129 "max_cntlid": 65519, 00:22:45.129 "ana_reporting": false 00:22:45.129 } 00:22:45.129 }, 00:22:45.129 { 00:22:45.129 "method": "nvmf_subsystem_add_host", 00:22:45.129 "params": { 00:22:45.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.129 "host": "nqn.2016-06.io.spdk:host1", 00:22:45.129 "psk": "key0" 00:22:45.129 } 00:22:45.129 }, 00:22:45.129 { 00:22:45.129 "method": "nvmf_subsystem_add_ns", 00:22:45.129 "params": { 00:22:45.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.129 "namespace": { 00:22:45.129 "nsid": 1, 
00:22:45.129 "bdev_name": "malloc0", 00:22:45.129 "nguid": "1935377FBA2B4C7AABF2BF603E8EF216", 00:22:45.129 "uuid": "1935377f-ba2b-4c7a-abf2-bf603e8ef216", 00:22:45.129 "no_auto_visible": false 00:22:45.129 } 00:22:45.129 } 00:22:45.129 }, 00:22:45.129 { 00:22:45.129 "method": "nvmf_subsystem_add_listener", 00:22:45.129 "params": { 00:22:45.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.129 "listen_address": { 00:22:45.129 "trtype": "TCP", 00:22:45.129 "adrfam": "IPv4", 00:22:45.129 "traddr": "10.0.0.2", 00:22:45.129 "trsvcid": "4420" 00:22:45.129 }, 00:22:45.129 "secure_channel": true 00:22:45.129 } 00:22:45.130 } 00:22:45.130 ] 00:22:45.130 } 00:22:45.130 ] 00:22:45.130 }' 00:22:45.130 09:31:32 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:45.441 09:31:32 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:22:45.441 "subsystems": [ 00:22:45.441 { 00:22:45.441 "subsystem": "keyring", 00:22:45.441 "config": [ 00:22:45.441 { 00:22:45.441 "method": "keyring_file_add_key", 00:22:45.441 "params": { 00:22:45.441 "name": "key0", 00:22:45.441 "path": "/tmp/tmp.Void7Inglv" 00:22:45.441 } 00:22:45.441 } 00:22:45.441 ] 00:22:45.441 }, 00:22:45.441 { 00:22:45.441 "subsystem": "iobuf", 00:22:45.441 "config": [ 00:22:45.441 { 00:22:45.441 "method": "iobuf_set_options", 00:22:45.441 "params": { 00:22:45.441 "small_pool_count": 8192, 00:22:45.441 "large_pool_count": 1024, 00:22:45.441 "small_bufsize": 8192, 00:22:45.441 "large_bufsize": 135168 00:22:45.441 } 00:22:45.441 } 00:22:45.441 ] 00:22:45.441 }, 00:22:45.441 { 00:22:45.441 "subsystem": "sock", 00:22:45.441 "config": [ 00:22:45.441 { 00:22:45.441 "method": "sock_set_default_impl", 00:22:45.441 "params": { 00:22:45.441 "impl_name": "posix" 00:22:45.441 } 00:22:45.441 }, 00:22:45.441 { 00:22:45.441 "method": "sock_impl_set_options", 00:22:45.441 "params": { 00:22:45.441 "impl_name": "ssl", 00:22:45.441 "recv_buf_size": 4096, 00:22:45.441 "send_buf_size": 4096, 00:22:45.441 "enable_recv_pipe": true, 00:22:45.441 "enable_quickack": false, 00:22:45.441 "enable_placement_id": 0, 00:22:45.441 "enable_zerocopy_send_server": true, 00:22:45.441 "enable_zerocopy_send_client": false, 00:22:45.441 "zerocopy_threshold": 0, 00:22:45.441 "tls_version": 0, 00:22:45.441 "enable_ktls": false 00:22:45.441 } 00:22:45.441 }, 00:22:45.441 { 00:22:45.441 "method": "sock_impl_set_options", 00:22:45.441 "params": { 00:22:45.441 "impl_name": "posix", 00:22:45.441 "recv_buf_size": 2097152, 00:22:45.441 "send_buf_size": 2097152, 00:22:45.441 "enable_recv_pipe": true, 00:22:45.441 "enable_quickack": false, 00:22:45.441 "enable_placement_id": 0, 00:22:45.441 "enable_zerocopy_send_server": true, 00:22:45.441 "enable_zerocopy_send_client": false, 00:22:45.441 "zerocopy_threshold": 0, 00:22:45.441 "tls_version": 0, 00:22:45.441 "enable_ktls": false 00:22:45.441 } 00:22:45.441 } 00:22:45.441 ] 00:22:45.441 }, 00:22:45.441 { 00:22:45.441 "subsystem": "vmd", 00:22:45.441 "config": [] 00:22:45.441 }, 00:22:45.441 { 00:22:45.441 "subsystem": "accel", 00:22:45.441 "config": [ 00:22:45.441 { 00:22:45.442 "method": "accel_set_options", 00:22:45.442 "params": { 00:22:45.442 "small_cache_size": 128, 00:22:45.442 "large_cache_size": 16, 00:22:45.442 "task_count": 2048, 00:22:45.442 "sequence_count": 2048, 00:22:45.442 "buf_count": 2048 00:22:45.442 } 00:22:45.442 } 00:22:45.442 ] 00:22:45.442 }, 00:22:45.442 { 00:22:45.442 "subsystem": "bdev", 00:22:45.442 "config": [ 
00:22:45.442 { 00:22:45.442 "method": "bdev_set_options", 00:22:45.442 "params": { 00:22:45.442 "bdev_io_pool_size": 65535, 00:22:45.442 "bdev_io_cache_size": 256, 00:22:45.442 "bdev_auto_examine": true, 00:22:45.442 "iobuf_small_cache_size": 128, 00:22:45.442 "iobuf_large_cache_size": 16 00:22:45.442 } 00:22:45.442 }, 00:22:45.442 { 00:22:45.442 "method": "bdev_raid_set_options", 00:22:45.442 "params": { 00:22:45.442 "process_window_size_kb": 1024 00:22:45.442 } 00:22:45.442 }, 00:22:45.442 { 00:22:45.442 "method": "bdev_iscsi_set_options", 00:22:45.442 "params": { 00:22:45.442 "timeout_sec": 30 00:22:45.442 } 00:22:45.442 }, 00:22:45.442 { 00:22:45.442 "method": "bdev_nvme_set_options", 00:22:45.442 "params": { 00:22:45.442 "action_on_timeout": "none", 00:22:45.442 "timeout_us": 0, 00:22:45.442 "timeout_admin_us": 0, 00:22:45.442 "keep_alive_timeout_ms": 10000, 00:22:45.442 "arbitration_burst": 0, 00:22:45.442 "low_priority_weight": 0, 00:22:45.442 "medium_priority_weight": 0, 00:22:45.442 "high_priority_weight": 0, 00:22:45.442 "nvme_adminq_poll_period_us": 10000, 00:22:45.442 "nvme_ioq_poll_period_us": 0, 00:22:45.442 "io_queue_requests": 512, 00:22:45.442 "delay_cmd_submit": true, 00:22:45.442 "transport_retry_count": 4, 00:22:45.442 "bdev_retry_count": 3, 00:22:45.442 "transport_ack_timeout": 0, 00:22:45.442 "ctrlr_loss_timeout_sec": 0, 00:22:45.442 "reconnect_delay_sec": 0, 00:22:45.442 "fast_io_fail_timeout_sec": 0, 00:22:45.442 "disable_auto_failback": false, 00:22:45.442 "generate_uuids": false, 00:22:45.442 "transport_tos": 0, 00:22:45.442 "nvme_error_stat": false, 00:22:45.442 "rdma_srq_size": 0, 00:22:45.442 "io_path_stat": false, 00:22:45.442 "allow_accel_sequence": false, 00:22:45.442 "rdma_max_cq_size": 0, 00:22:45.442 "rdma_cm_event_timeout_ms": 0, 00:22:45.442 "dhchap_digests": [ 00:22:45.442 "sha256", 00:22:45.442 "sha384", 00:22:45.442 "sha512" 00:22:45.442 ], 00:22:45.442 "dhchap_dhgroups": [ 00:22:45.442 "null", 00:22:45.442 "ffdhe2048", 00:22:45.442 "ffdhe3072", 00:22:45.442 "ffdhe4096", 00:22:45.442 "ffdhe6144", 00:22:45.442 "ffdhe8192" 00:22:45.442 ] 00:22:45.442 } 00:22:45.442 }, 00:22:45.442 { 00:22:45.442 "method": "bdev_nvme_attach_controller", 00:22:45.442 "params": { 00:22:45.442 "name": "nvme0", 00:22:45.442 "trtype": "TCP", 00:22:45.442 "adrfam": "IPv4", 00:22:45.442 "traddr": "10.0.0.2", 00:22:45.442 "trsvcid": "4420", 00:22:45.442 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.442 "prchk_reftag": false, 00:22:45.442 "prchk_guard": false, 00:22:45.442 "ctrlr_loss_timeout_sec": 0, 00:22:45.442 "reconnect_delay_sec": 0, 00:22:45.442 "fast_io_fail_timeout_sec": 0, 00:22:45.442 "psk": "key0", 00:22:45.442 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:45.442 "hdgst": false, 00:22:45.442 "ddgst": false 00:22:45.442 } 00:22:45.442 }, 00:22:45.442 { 00:22:45.442 "method": "bdev_nvme_set_hotplug", 00:22:45.442 "params": { 00:22:45.442 "period_us": 100000, 00:22:45.442 "enable": false 00:22:45.442 } 00:22:45.442 }, 00:22:45.442 { 00:22:45.442 "method": "bdev_enable_histogram", 00:22:45.442 "params": { 00:22:45.442 "name": "nvme0n1", 00:22:45.442 "enable": true 00:22:45.442 } 00:22:45.442 }, 00:22:45.442 { 00:22:45.442 "method": "bdev_wait_for_examine" 00:22:45.442 } 00:22:45.442 ] 00:22:45.442 }, 00:22:45.442 { 00:22:45.442 "subsystem": "nbd", 00:22:45.442 "config": [] 00:22:45.442 } 00:22:45.442 ] 00:22:45.442 }' 00:22:45.442 09:31:32 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 745914 00:22:45.442 09:31:32 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 745914 ']' 00:22:45.442 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 745914 00:22:45.442 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:45.442 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:45.442 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 745914 00:22:45.442 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:45.442 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:45.442 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 745914' 00:22:45.442 killing process with pid 745914 00:22:45.442 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 745914 00:22:45.442 Received shutdown signal, test time was about 1.000000 seconds 00:22:45.442 00:22:45.442 Latency(us) 00:22:45.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.442 =================================================================================================================== 00:22:45.442 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:45.442 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 745914 00:22:45.442 09:31:32 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 745590 00:22:45.442 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 745590 ']' 00:22:45.442 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 745590 00:22:45.442 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:45.442 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:45.442 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 745590 00:22:45.442 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:45.442 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:45.442 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 745590' 00:22:45.442 killing process with pid 745590 00:22:45.442 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 745590 00:22:45.442 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 745590 00:22:45.730 09:31:32 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:22:45.730 09:31:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:45.730 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:45.730 09:31:32 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:22:45.730 "subsystems": [ 00:22:45.730 { 00:22:45.730 "subsystem": "keyring", 00:22:45.730 "config": [ 00:22:45.730 { 00:22:45.730 "method": "keyring_file_add_key", 00:22:45.730 "params": { 00:22:45.730 "name": "key0", 00:22:45.730 "path": "/tmp/tmp.Void7Inglv" 00:22:45.730 } 00:22:45.730 } 00:22:45.730 ] 00:22:45.730 }, 00:22:45.730 { 00:22:45.730 "subsystem": "iobuf", 00:22:45.730 "config": [ 00:22:45.730 { 00:22:45.730 "method": "iobuf_set_options", 00:22:45.730 "params": { 00:22:45.730 "small_pool_count": 8192, 00:22:45.730 "large_pool_count": 1024, 00:22:45.730 "small_bufsize": 8192, 00:22:45.730 "large_bufsize": 135168 00:22:45.730 } 00:22:45.730 } 00:22:45.730 ] 00:22:45.730 }, 00:22:45.730 { 
00:22:45.730 "subsystem": "sock", 00:22:45.730 "config": [ 00:22:45.730 { 00:22:45.730 "method": "sock_set_default_impl", 00:22:45.730 "params": { 00:22:45.730 "impl_name": "posix" 00:22:45.730 } 00:22:45.730 }, 00:22:45.730 { 00:22:45.730 "method": "sock_impl_set_options", 00:22:45.730 "params": { 00:22:45.730 "impl_name": "ssl", 00:22:45.730 "recv_buf_size": 4096, 00:22:45.730 "send_buf_size": 4096, 00:22:45.730 "enable_recv_pipe": true, 00:22:45.730 "enable_quickack": false, 00:22:45.730 "enable_placement_id": 0, 00:22:45.730 "enable_zerocopy_send_server": true, 00:22:45.730 "enable_zerocopy_send_client": false, 00:22:45.730 "zerocopy_threshold": 0, 00:22:45.730 "tls_version": 0, 00:22:45.730 "enable_ktls": false 00:22:45.730 } 00:22:45.730 }, 00:22:45.730 { 00:22:45.730 "method": "sock_impl_set_options", 00:22:45.730 "params": { 00:22:45.730 "impl_name": "posix", 00:22:45.730 "recv_buf_size": 2097152, 00:22:45.730 "send_buf_size": 2097152, 00:22:45.730 "enable_recv_pipe": true, 00:22:45.730 "enable_quickack": false, 00:22:45.730 "enable_placement_id": 0, 00:22:45.730 "enable_zerocopy_send_server": true, 00:22:45.730 "enable_zerocopy_send_client": false, 00:22:45.730 "zerocopy_threshold": 0, 00:22:45.730 "tls_version": 0, 00:22:45.730 "enable_ktls": false 00:22:45.730 } 00:22:45.730 } 00:22:45.730 ] 00:22:45.730 }, 00:22:45.730 { 00:22:45.730 "subsystem": "vmd", 00:22:45.730 "config": [] 00:22:45.730 }, 00:22:45.730 { 00:22:45.730 "subsystem": "accel", 00:22:45.730 "config": [ 00:22:45.730 { 00:22:45.730 "method": "accel_set_options", 00:22:45.730 "params": { 00:22:45.730 "small_cache_size": 128, 00:22:45.730 "large_cache_size": 16, 00:22:45.730 "task_count": 2048, 00:22:45.730 "sequence_count": 2048, 00:22:45.730 "buf_count": 2048 00:22:45.730 } 00:22:45.730 } 00:22:45.730 ] 00:22:45.730 }, 00:22:45.730 { 00:22:45.730 "subsystem": "bdev", 00:22:45.730 "config": [ 00:22:45.730 { 00:22:45.730 "method": "bdev_set_options", 00:22:45.730 "params": { 00:22:45.730 "bdev_io_pool_size": 65535, 00:22:45.730 "bdev_io_cache_size": 256, 00:22:45.730 "bdev_auto_examine": true, 00:22:45.730 "iobuf_small_cache_size": 128, 00:22:45.730 "iobuf_large_cache_size": 16 00:22:45.730 } 00:22:45.730 }, 00:22:45.730 { 00:22:45.730 "method": "bdev_raid_set_options", 00:22:45.730 "params": { 00:22:45.730 "process_window_size_kb": 1024 00:22:45.730 } 00:22:45.730 }, 00:22:45.730 { 00:22:45.730 "method": "bdev_iscsi_set_options", 00:22:45.730 "params": { 00:22:45.730 "timeout_sec": 30 00:22:45.730 } 00:22:45.730 }, 00:22:45.730 { 00:22:45.730 "method": "bdev_nvme_set_options", 00:22:45.730 "params": { 00:22:45.730 "action_on_timeout": "none", 00:22:45.730 "timeout_us": 0, 00:22:45.730 "timeout_admin_us": 0, 00:22:45.730 "keep_alive_timeout_ms": 10000, 00:22:45.730 "arbitration_burst": 0, 00:22:45.730 "low_priority_weight": 0, 00:22:45.730 "medium_priority_weight": 0, 00:22:45.730 "high_priority_weight": 0, 00:22:45.730 "nvme_adminq_poll_period_us": 10000, 00:22:45.730 "nvme_ioq_poll_period_us": 0, 00:22:45.730 "io_queue_requests": 0, 00:22:45.730 "delay_cmd_submit": true, 00:22:45.730 "transport_retry_count": 4, 00:22:45.730 "bdev_retry_count": 3, 00:22:45.730 "transport_ack_timeout": 0, 00:22:45.730 "ctrlr_loss_timeout_sec": 0, 00:22:45.730 "reconnect_delay_sec": 0, 00:22:45.730 "fast_io_fail_timeout_sec": 0, 00:22:45.730 "disable_auto_failback": false, 00:22:45.730 "generate_uuids": false, 00:22:45.730 "transport_tos": 0, 00:22:45.730 "nvme_error_stat": false, 00:22:45.730 "rdma_srq_size": 0, 00:22:45.730 
"io_path_stat": false, 00:22:45.730 "allow_accel_sequence": false, 00:22:45.730 "rdma_max_cq_size": 0, 00:22:45.730 "rdma_cm_event_timeout_ms": 0, 00:22:45.730 "dhchap_digests": [ 00:22:45.730 "sha256", 00:22:45.730 "sha384", 00:22:45.730 "sha512" 00:22:45.730 ], 00:22:45.730 "dhchap_dhgroups": [ 00:22:45.730 "null", 00:22:45.730 "ffdhe2048", 00:22:45.730 "ffdhe3072", 00:22:45.730 "ffdhe4096", 00:22:45.730 "ffdhe6144", 00:22:45.730 "ffdhe8192" 00:22:45.730 ] 00:22:45.730 } 00:22:45.730 }, 00:22:45.730 { 00:22:45.730 "method": "bdev_nvme_set_hotplug", 00:22:45.730 "params": { 00:22:45.730 "period_us": 100000, 00:22:45.730 "enable": false 00:22:45.730 } 00:22:45.730 }, 00:22:45.730 { 00:22:45.730 "method": "bdev_malloc_create", 00:22:45.730 "params": { 00:22:45.730 "name": "malloc0", 00:22:45.730 "num_blocks": 8192, 00:22:45.730 "block_size": 4096, 00:22:45.730 "physical_block_size": 4096, 00:22:45.730 "uuid": "1935377f-ba2b-4c7a-abf2-bf603e8ef216", 00:22:45.730 "optimal_io_boundary": 0 00:22:45.730 } 00:22:45.730 }, 00:22:45.730 { 00:22:45.730 "method": "bdev_wait_for_examine" 00:22:45.730 } 00:22:45.730 ] 00:22:45.730 }, 00:22:45.730 { 00:22:45.730 "subsystem": "nbd", 00:22:45.730 "config": [] 00:22:45.730 }, 00:22:45.730 { 00:22:45.730 "subsystem": "scheduler", 00:22:45.730 "config": [ 00:22:45.730 { 00:22:45.730 "method": "framework_set_scheduler", 00:22:45.730 "params": { 00:22:45.730 "name": "static" 00:22:45.730 } 00:22:45.730 } 00:22:45.730 ] 00:22:45.730 }, 00:22:45.730 { 00:22:45.730 "subsystem": "nvmf", 00:22:45.730 "config": [ 00:22:45.730 { 00:22:45.730 "method": "nvmf_set_config", 00:22:45.730 "params": { 00:22:45.730 "discovery_filter": "match_any", 00:22:45.730 "admin_cmd_passthru": { 00:22:45.730 "identify_ctrlr": false 00:22:45.730 } 00:22:45.730 } 00:22:45.730 }, 00:22:45.730 { 00:22:45.730 "method": "nvmf_set_max_subsystems", 00:22:45.730 "params": { 00:22:45.730 "max_subsystems": 1024 00:22:45.730 } 00:22:45.730 }, 00:22:45.730 { 00:22:45.730 "method": "nvmf_set_crdt", 00:22:45.730 "params": { 00:22:45.730 "crdt1": 0, 00:22:45.730 "crdt2": 0, 00:22:45.730 "crdt3": 0 00:22:45.730 } 00:22:45.730 }, 00:22:45.730 { 00:22:45.730 "method": "nvmf_create_transport", 00:22:45.730 "params": { 00:22:45.730 "trtype": "TCP", 00:22:45.730 "max_queue_depth": 128, 00:22:45.730 "max_io_qpairs_per_ctrlr": 127, 00:22:45.730 "in_capsule_data_size": 4096, 00:22:45.730 "max_io_size": 131072, 00:22:45.730 "io_unit_size": 131072, 00:22:45.730 "max_aq_depth": 128, 00:22:45.730 "num_shared_buffers": 511, 00:22:45.730 "buf_cache_size": 4294967295, 00:22:45.730 "dif_insert_or_strip": false, 00:22:45.730 "zcopy": false, 00:22:45.730 "c2h_success": false, 00:22:45.730 "sock_priority": 0, 00:22:45.730 "abort_timeout_sec": 1, 00:22:45.730 "ack_timeout": 0, 00:22:45.730 "data_wr_pool_size": 0 00:22:45.730 } 00:22:45.730 }, 00:22:45.730 { 00:22:45.730 "method": "nvmf_create_subsystem", 00:22:45.730 "params": { 00:22:45.730 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.731 "allow_any_host": false, 00:22:45.731 "serial_number": "00000000000000000000", 00:22:45.731 "model_number": "SPDK bdev Controller", 00:22:45.731 "max_namespaces": 32, 00:22:45.731 "min_cntlid": 1, 00:22:45.731 "max_cntlid": 65519, 00:22:45.731 "ana_reporting": false 00:22:45.731 } 00:22:45.731 }, 00:22:45.731 { 00:22:45.731 "method": "nvmf_subsystem_add_host", 00:22:45.731 "params": { 00:22:45.731 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.731 "host": "nqn.2016-06.io.spdk:host1", 00:22:45.731 "psk": "key0" 00:22:45.731 } 00:22:45.731 
}, 00:22:45.731 { 00:22:45.731 "method": "nvmf_subsystem_add_ns", 00:22:45.731 "params": { 00:22:45.731 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.731 "namespace": { 00:22:45.731 "nsid": 1, 00:22:45.731 "bdev_name": "malloc0", 00:22:45.731 "nguid": "1935377FBA2B4C7AABF2BF603E8EF216", 00:22:45.731 "uuid": "1935377f-ba2b-4c7a-abf2-bf603e8ef216", 00:22:45.731 "no_auto_visible": false 00:22:45.731 } 00:22:45.731 } 00:22:45.731 }, 00:22:45.731 { 00:22:45.731 "method": "nvmf_subsystem_add_listener", 00:22:45.731 "params": { 00:22:45.731 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.731 "listen_address": { 00:22:45.731 "trtype": "TCP", 00:22:45.731 "adrfam": "IPv4", 00:22:45.731 "traddr": "10.0.0.2", 00:22:45.731 "trsvcid": "4420" 00:22:45.731 }, 00:22:45.731 "secure_channel": true 00:22:45.731 } 00:22:45.731 } 00:22:45.731 ] 00:22:45.731 } 00:22:45.731 ] 00:22:45.731 }' 00:22:45.731 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.731 09:31:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=746553 00:22:45.731 09:31:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 746553 00:22:45.731 09:31:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:45.731 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 746553 ']' 00:22:45.731 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.731 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:45.731 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.731 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:45.731 09:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.731 [2024-07-15 09:31:32.754499] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:22:45.731 [2024-07-15 09:31:32.754555] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.731 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.731 [2024-07-15 09:31:32.827747] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.731 [2024-07-15 09:31:32.892895] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.731 [2024-07-15 09:31:32.892935] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.731 [2024-07-15 09:31:32.892943] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.731 [2024-07-15 09:31:32.892949] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.731 [2024-07-15 09:31:32.892955] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:45.731 [2024-07-15 09:31:32.893019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.991 [2024-07-15 09:31:33.090001] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.991 [2024-07-15 09:31:33.122012] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:45.991 [2024-07-15 09:31:33.131027] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.562 09:31:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:46.562 09:31:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:46.562 09:31:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:46.562 09:31:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:46.562 09:31:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.562 09:31:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.562 09:31:33 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=746632 00:22:46.562 09:31:33 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 746632 /var/tmp/bdevperf.sock 00:22:46.562 09:31:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 746632 ']' 00:22:46.562 09:31:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:46.562 09:31:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:46.562 09:31:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:46.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:46.562 09:31:33 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:46.562 09:31:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:46.562 09:31:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.562 09:31:33 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:22:46.562 "subsystems": [ 00:22:46.562 { 00:22:46.562 "subsystem": "keyring", 00:22:46.562 "config": [ 00:22:46.562 { 00:22:46.562 "method": "keyring_file_add_key", 00:22:46.562 "params": { 00:22:46.562 "name": "key0", 00:22:46.562 "path": "/tmp/tmp.Void7Inglv" 00:22:46.562 } 00:22:46.562 } 00:22:46.562 ] 00:22:46.562 }, 00:22:46.562 { 00:22:46.562 "subsystem": "iobuf", 00:22:46.562 "config": [ 00:22:46.562 { 00:22:46.562 "method": "iobuf_set_options", 00:22:46.562 "params": { 00:22:46.562 "small_pool_count": 8192, 00:22:46.562 "large_pool_count": 1024, 00:22:46.562 "small_bufsize": 8192, 00:22:46.562 "large_bufsize": 135168 00:22:46.562 } 00:22:46.562 } 00:22:46.562 ] 00:22:46.562 }, 00:22:46.562 { 00:22:46.562 "subsystem": "sock", 00:22:46.562 "config": [ 00:22:46.562 { 00:22:46.562 "method": "sock_set_default_impl", 00:22:46.562 "params": { 00:22:46.562 "impl_name": "posix" 00:22:46.562 } 00:22:46.562 }, 00:22:46.562 { 00:22:46.562 "method": "sock_impl_set_options", 00:22:46.562 "params": { 00:22:46.562 "impl_name": "ssl", 00:22:46.562 "recv_buf_size": 4096, 00:22:46.562 "send_buf_size": 4096, 00:22:46.562 "enable_recv_pipe": true, 00:22:46.562 "enable_quickack": false, 00:22:46.562 "enable_placement_id": 0, 00:22:46.562 "enable_zerocopy_send_server": true, 00:22:46.562 "enable_zerocopy_send_client": false, 00:22:46.562 "zerocopy_threshold": 0, 00:22:46.562 "tls_version": 0, 00:22:46.562 "enable_ktls": false 00:22:46.562 } 00:22:46.562 }, 00:22:46.562 { 00:22:46.562 "method": "sock_impl_set_options", 00:22:46.562 "params": { 00:22:46.562 "impl_name": "posix", 00:22:46.562 "recv_buf_size": 2097152, 00:22:46.562 "send_buf_size": 2097152, 00:22:46.562 "enable_recv_pipe": true, 00:22:46.562 "enable_quickack": false, 00:22:46.562 "enable_placement_id": 0, 00:22:46.562 "enable_zerocopy_send_server": true, 00:22:46.562 "enable_zerocopy_send_client": false, 00:22:46.562 "zerocopy_threshold": 0, 00:22:46.562 "tls_version": 0, 00:22:46.562 "enable_ktls": false 00:22:46.562 } 00:22:46.562 } 00:22:46.562 ] 00:22:46.562 }, 00:22:46.562 { 00:22:46.562 "subsystem": "vmd", 00:22:46.562 "config": [] 00:22:46.562 }, 00:22:46.562 { 00:22:46.562 "subsystem": "accel", 00:22:46.562 "config": [ 00:22:46.562 { 00:22:46.562 "method": "accel_set_options", 00:22:46.562 "params": { 00:22:46.562 "small_cache_size": 128, 00:22:46.562 "large_cache_size": 16, 00:22:46.562 "task_count": 2048, 00:22:46.562 "sequence_count": 2048, 00:22:46.562 "buf_count": 2048 00:22:46.562 } 00:22:46.562 } 00:22:46.562 ] 00:22:46.562 }, 00:22:46.562 { 00:22:46.562 "subsystem": "bdev", 00:22:46.562 "config": [ 00:22:46.562 { 00:22:46.562 "method": "bdev_set_options", 00:22:46.562 "params": { 00:22:46.562 "bdev_io_pool_size": 65535, 00:22:46.562 "bdev_io_cache_size": 256, 00:22:46.562 "bdev_auto_examine": true, 00:22:46.562 "iobuf_small_cache_size": 128, 00:22:46.562 "iobuf_large_cache_size": 16 00:22:46.562 } 00:22:46.563 }, 00:22:46.563 { 00:22:46.563 "method": "bdev_raid_set_options", 00:22:46.563 "params": { 00:22:46.563 "process_window_size_kb": 1024 00:22:46.563 } 
00:22:46.563 }, 00:22:46.563 { 00:22:46.563 "method": "bdev_iscsi_set_options", 00:22:46.563 "params": { 00:22:46.563 "timeout_sec": 30 00:22:46.563 } 00:22:46.563 }, 00:22:46.563 { 00:22:46.563 "method": "bdev_nvme_set_options", 00:22:46.563 "params": { 00:22:46.563 "action_on_timeout": "none", 00:22:46.563 "timeout_us": 0, 00:22:46.563 "timeout_admin_us": 0, 00:22:46.563 "keep_alive_timeout_ms": 10000, 00:22:46.563 "arbitration_burst": 0, 00:22:46.563 "low_priority_weight": 0, 00:22:46.563 "medium_priority_weight": 0, 00:22:46.563 "high_priority_weight": 0, 00:22:46.563 "nvme_adminq_poll_period_us": 10000, 00:22:46.563 "nvme_ioq_poll_period_us": 0, 00:22:46.563 "io_queue_requests": 512, 00:22:46.563 "delay_cmd_submit": true, 00:22:46.563 "transport_retry_count": 4, 00:22:46.563 "bdev_retry_count": 3, 00:22:46.563 "transport_ack_timeout": 0, 00:22:46.563 "ctrlr_loss_timeout_sec": 0, 00:22:46.563 "reconnect_delay_sec": 0, 00:22:46.563 "fast_io_fail_timeout_sec": 0, 00:22:46.563 "disable_auto_failback": false, 00:22:46.563 "generate_uuids": false, 00:22:46.563 "transport_tos": 0, 00:22:46.563 "nvme_error_stat": false, 00:22:46.563 "rdma_srq_size": 0, 00:22:46.563 "io_path_stat": false, 00:22:46.563 "allow_accel_sequence": false, 00:22:46.563 "rdma_max_cq_size": 0, 00:22:46.563 "rdma_cm_event_timeout_ms": 0, 00:22:46.563 "dhchap_digests": [ 00:22:46.563 "sha256", 00:22:46.563 "sha384", 00:22:46.563 "sha512" 00:22:46.563 ], 00:22:46.563 "dhchap_dhgroups": [ 00:22:46.563 "null", 00:22:46.563 "ffdhe2048", 00:22:46.563 "ffdhe3072", 00:22:46.563 "ffdhe4096", 00:22:46.563 "ffdhe6144", 00:22:46.563 "ffdhe8192" 00:22:46.563 ] 00:22:46.563 } 00:22:46.563 }, 00:22:46.563 { 00:22:46.563 "method": "bdev_nvme_attach_controller", 00:22:46.563 "params": { 00:22:46.563 "name": "nvme0", 00:22:46.563 "trtype": "TCP", 00:22:46.563 "adrfam": "IPv4", 00:22:46.563 "traddr": "10.0.0.2", 00:22:46.563 "trsvcid": "4420", 00:22:46.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.563 "prchk_reftag": false, 00:22:46.563 "prchk_guard": false, 00:22:46.563 "ctrlr_loss_timeout_sec": 0, 00:22:46.563 "reconnect_delay_sec": 0, 00:22:46.563 "fast_io_fail_timeout_sec": 0, 00:22:46.563 "psk": "key0", 00:22:46.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:46.563 "hdgst": false, 00:22:46.563 "ddgst": false 00:22:46.563 } 00:22:46.563 }, 00:22:46.563 { 00:22:46.563 "method": "bdev_nvme_set_hotplug", 00:22:46.563 "params": { 00:22:46.563 "period_us": 100000, 00:22:46.563 "enable": false 00:22:46.563 } 00:22:46.563 }, 00:22:46.563 { 00:22:46.563 "method": "bdev_enable_histogram", 00:22:46.563 "params": { 00:22:46.563 "name": "nvme0n1", 00:22:46.563 "enable": true 00:22:46.563 } 00:22:46.563 }, 00:22:46.563 { 00:22:46.563 "method": "bdev_wait_for_examine" 00:22:46.563 } 00:22:46.563 ] 00:22:46.563 }, 00:22:46.563 { 00:22:46.563 "subsystem": "nbd", 00:22:46.563 "config": [] 00:22:46.563 } 00:22:46.563 ] 00:22:46.563 }' 00:22:46.563 [2024-07-15 09:31:33.618931] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:22:46.563 [2024-07-15 09:31:33.618969] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid746632 ] 00:22:46.563 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.563 [2024-07-15 09:31:33.665250] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.563 [2024-07-15 09:31:33.719091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.824 [2024-07-15 09:31:33.852087] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:47.395 09:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:47.395 09:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:47.395 09:31:34 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:47.395 09:31:34 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:22:47.395 09:31:34 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.395 09:31:34 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:47.656 Running I/O for 1 seconds... 00:22:48.600 00:22:48.600 Latency(us) 00:22:48.600 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.600 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:48.600 Verification LBA range: start 0x0 length 0x2000 00:22:48.600 nvme0n1 : 1.04 5435.60 21.23 0.00 0.00 23121.05 5515.95 62477.65 00:22:48.600 =================================================================================================================== 00:22:48.600 Total : 5435.60 21.23 0.00 0.00 23121.05 5515.95 62477.65 00:22:48.600 0 00:22:48.600 09:31:35 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:22:48.600 09:31:35 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:22:48.600 09:31:35 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:48.600 09:31:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:22:48.600 09:31:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:22:48.600 09:31:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:22:48.600 09:31:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:48.600 09:31:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:22:48.600 09:31:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:22:48.600 09:31:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:22:48.600 09:31:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:48.600 nvmf_trace.0 00:22:48.600 09:31:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:22:48.600 09:31:35 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 746632 00:22:48.600 09:31:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 746632 ']' 00:22:48.600 09:31:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # 
kill -0 746632 00:22:48.600 09:31:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:48.600 09:31:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:48.600 09:31:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 746632 00:22:48.861 09:31:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:48.861 09:31:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:48.861 09:31:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 746632' 00:22:48.861 killing process with pid 746632 00:22:48.861 09:31:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 746632 00:22:48.861 Received shutdown signal, test time was about 1.000000 seconds 00:22:48.861 00:22:48.861 Latency(us) 00:22:48.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.861 =================================================================================================================== 00:22:48.861 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:48.861 09:31:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 746632 00:22:48.861 09:31:35 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:48.861 09:31:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:48.861 09:31:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:22:48.861 09:31:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:48.861 09:31:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:48.861 09:31:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:48.861 09:31:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:48.861 rmmod nvme_tcp 00:22:48.861 rmmod nvme_fabrics 00:22:48.861 rmmod nvme_keyring 00:22:48.861 09:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:48.861 09:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:48.861 09:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:48.861 09:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 746553 ']' 00:22:48.861 09:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 746553 00:22:48.861 09:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 746553 ']' 00:22:48.861 09:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 746553 00:22:48.861 09:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:48.861 09:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:48.861 09:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 746553 00:22:49.121 09:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:49.121 09:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:49.121 09:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 746553' 00:22:49.121 killing process with pid 746553 00:22:49.121 09:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 746553 00:22:49.121 09:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 746553 00:22:49.121 09:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:49.121 09:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:49.121 09:31:36 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:49.121 09:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:49.121 09:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:49.121 09:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.121 09:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:49.121 09:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.667 09:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:51.667 09:31:38 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.xybYPI9MRx /tmp/tmp.SDeyLaygSW /tmp/tmp.Void7Inglv 00:22:51.667 00:22:51.667 real 1m23.783s 00:22:51.667 user 2m7.296s 00:22:51.667 sys 0m26.835s 00:22:51.667 09:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:51.667 09:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.667 ************************************ 00:22:51.667 END TEST nvmf_tls 00:22:51.667 ************************************ 00:22:51.667 09:31:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:51.667 09:31:38 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:51.667 09:31:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:51.667 09:31:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:51.667 09:31:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:51.667 ************************************ 00:22:51.667 START TEST nvmf_fips 00:22:51.667 ************************************ 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:51.667 * Looking for test storage... 
00:22:51.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.667 09:31:38 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:51.667 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:22:51.668 Error setting digest 00:22:51.668 0032FE8CAE7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:51.668 0032FE8CAE7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:22:51.668 09:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:59.851 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:59.851 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:59.851 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:59.851 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:59.851 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:59.851 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:59.851 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:59.851 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:59.851 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:59.851 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:59.852 
09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:59.852 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:59.852 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:59.852 Found net devices under 0000:31:00.0: cvl_0_0 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:59.852 Found net devices under 0000:31:00.1: cvl_0_1 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:59.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:59.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:22:59.852 00:22:59.852 --- 10.0.0.2 ping statistics --- 00:22:59.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.852 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:59.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:59.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:22:59.852 00:22:59.852 --- 10.0.0.1 ping statistics --- 00:22:59.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.852 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=751972 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 751972 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 751972 ']' 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:59.852 09:31:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:59.852 [2024-07-15 09:31:47.014457] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:22:59.852 [2024-07-15 09:31:47.014527] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.113 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.113 [2024-07-15 09:31:47.111497] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.113 [2024-07-15 09:31:47.203886] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.113 [2024-07-15 09:31:47.203943] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:00.113 [2024-07-15 09:31:47.203951] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.113 [2024-07-15 09:31:47.203958] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.113 [2024-07-15 09:31:47.203965] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:00.113 [2024-07-15 09:31:47.203991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.683 09:31:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:00.683 09:31:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:23:00.683 09:31:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:00.683 09:31:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:00.683 09:31:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:00.683 09:31:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.683 09:31:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:00.683 09:31:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:00.683 09:31:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:00.683 09:31:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:00.683 09:31:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:00.683 09:31:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:00.683 09:31:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:00.683 09:31:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:00.944 [2024-07-15 09:31:47.971933] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.944 [2024-07-15 09:31:47.987926] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:00.944 [2024-07-15 09:31:47.988197] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.944 [2024-07-15 09:31:48.018126] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:00.944 malloc0 00:23:00.944 09:31:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:00.944 09:31:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=752061 00:23:00.944 09:31:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 752061 /var/tmp/bdevperf.sock 00:23:00.944 09:31:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:00.944 09:31:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 752061 ']' 00:23:00.944 09:31:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.945 09:31:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:23:00.945 09:31:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:00.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:00.945 09:31:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:00.945 09:31:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:00.945 [2024-07-15 09:31:48.128400] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:23:00.945 [2024-07-15 09:31:48.128472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid752061 ] 00:23:01.205 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.205 [2024-07-15 09:31:48.190441] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.205 [2024-07-15 09:31:48.257832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.776 09:31:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:01.776 09:31:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:23:01.776 09:31:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:02.036 [2024-07-15 09:31:49.013277] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:02.036 [2024-07-15 09:31:49.013338] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:02.036 TLSTESTn1 00:23:02.036 09:31:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:02.036 Running I/O for 10 seconds... 
00:23:14.263 00:23:14.263 Latency(us) 00:23:14.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.263 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:14.263 Verification LBA range: start 0x0 length 0x2000 00:23:14.263 TLSTESTn1 : 10.04 3859.76 15.08 0.00 0.00 33112.32 6662.83 76021.76 00:23:14.263 =================================================================================================================== 00:23:14.263 Total : 3859.76 15.08 0.00 0.00 33112.32 6662.83 76021.76 00:23:14.263 0 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:14.263 nvmf_trace.0 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 752061 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 752061 ']' 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 752061 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 752061 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 752061' 00:23:14.263 killing process with pid 752061 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 752061 00:23:14.263 Received shutdown signal, test time was about 10.000000 seconds 00:23:14.263 00:23:14.263 Latency(us) 00:23:14.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.263 =================================================================================================================== 00:23:14.263 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:14.263 [2024-07-15 09:31:59.428596] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 752061 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:14.263 rmmod nvme_tcp 00:23:14.263 rmmod nvme_fabrics 00:23:14.263 rmmod nvme_keyring 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 751972 ']' 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 751972 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 751972 ']' 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 751972 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 751972 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 751972' 00:23:14.263 killing process with pid 751972 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 751972 00:23:14.263 [2024-07-15 09:31:59.666881] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 751972 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:14.263 09:31:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.834 09:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:14.834 09:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:14.834 00:23:14.834 real 0m23.502s 00:23:14.834 user 0m23.814s 00:23:14.834 sys 0m10.387s 00:23:14.834 09:32:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:14.834 09:32:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:14.834 ************************************ 00:23:14.834 END TEST nvmf_fips 00:23:14.834 
************************************ 00:23:14.834 09:32:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:14.834 09:32:01 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:23:14.834 09:32:01 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:23:14.834 09:32:01 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:23:14.834 09:32:01 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:23:14.834 09:32:01 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:23:14.834 09:32:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:22.977 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:22.977 09:32:09 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:22.977 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.977 09:32:09 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:22.978 Found net devices under 0000:31:00.0: cvl_0_0 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:22.978 Found net devices under 0000:31:00.1: cvl_0_1 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:23:22.978 09:32:09 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:22.978 09:32:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:22.978 09:32:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:23:22.978 09:32:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:22.978 ************************************ 00:23:22.978 START TEST nvmf_perf_adq 00:23:22.978 ************************************ 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:22.978 * Looking for test storage... 00:23:22.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:22.978 09:32:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:31.190 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:31.190 Found 0000:31:00.1 (0x8086 - 0x159b) 
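Earlier in this setup, when nvmf/common.sh was sourced, the host NQN and host ID were derived with nvme gen-hostnqn (the NVME_HOSTNQN/NVME_HOSTID values visible in the trace above). A minimal sketch of that derivation, assuming nvme-cli is installed; the exact string extraction used by common.sh may differ.

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the trailing uuid (assumption)
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    printf '%s\n' "${NVME_HOST[@]}"         # arguments later passed to 'nvme connect'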
00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:31.190 Found net devices under 0000:31:00.0: cvl_0_0 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:31.190 Found net devices under 0000:31:00.1: cvl_0_1 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:23:31.190 09:32:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:23:32.131 09:32:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:34.043 09:32:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:39.330 09:32:25 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:39.330 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:39.331 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:39.331 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:39.331 Found net devices under 0000:31:00.0: cvl_0_0 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:39.331 Found net devices under 0000:31:00.1: cvl_0_1 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:39.331 09:32:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.331 09:32:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.331 09:32:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.331 09:32:26 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:39.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms 00:23:39.331 00:23:39.331 --- 10.0.0.2 ping statistics --- 00:23:39.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.331 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms 00:23:39.331 09:32:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:39.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:23:39.331 00:23:39.331 --- 10.0.0.1 ping statistics --- 00:23:39.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.331 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:23:39.331 09:32:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.331 09:32:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:39.331 09:32:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:39.331 09:32:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.331 09:32:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:39.331 09:32:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:39.331 09:32:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.331 09:32:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:39.331 09:32:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:39.331 09:32:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:39.331 09:32:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:39.331 09:32:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:39.331 09:32:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.331 09:32:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=764992 00:23:39.331 09:32:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 764992 00:23:39.331 09:32:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:39.331 09:32:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 764992 ']' 00:23:39.331 09:32:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.331 09:32:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:39.331 09:32:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.331 09:32:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:39.331 09:32:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.331 [2024-07-15 09:32:26.192702] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
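nvmf_tcp_init, traced just above, splits the two E810 ports into a target side and an initiator side by moving cvl_0_0 into a network namespace, then verifies the path with a ping in each direction before the target application is started with --wait-for-rpc. A condensed sketch of those steps; addresses and interface names are taken from the log, and the nvmf_tgt path is abbreviated.

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                          # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side (root namespace)
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1
    # Launch the target inside the namespace, paused until RPC configuration is done:
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!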
00:23:39.331 [2024-07-15 09:32:26.192775] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.331 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.331 [2024-07-15 09:32:26.271600] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:39.331 [2024-07-15 09:32:26.347747] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.331 [2024-07-15 09:32:26.347793] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.331 [2024-07-15 09:32:26.347801] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.331 [2024-07-15 09:32:26.347807] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.331 [2024-07-15 09:32:26.347812] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.331 [2024-07-15 09:32:26.348035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.331 [2024-07-15 09:32:26.348239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.331 [2024-07-15 09:32:26.348365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.331 [2024-07-15 09:32:26.348366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.902 09:32:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:39.902 09:32:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:23:39.902 09:32:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:39.902 09:32:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:39.902 09:32:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.902 09:32:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.902 09:32:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:23:39.902 09:32:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:39.902 09:32:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:39.902 09:32:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.902 09:32:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.902 09:32:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.902 09:32:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:39.902 09:32:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:39.902 09:32:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.902 09:32:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.902 09:32:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.902 09:32:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:39.902 09:32:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.902 09:32:27 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:23:40.163 09:32:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.163 09:32:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:40.163 09:32:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.163 09:32:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:40.163 [2024-07-15 09:32:27.155735] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.163 09:32:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.163 09:32:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:40.163 09:32:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.163 09:32:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:40.163 Malloc1 00:23:40.163 09:32:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.163 09:32:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:40.163 09:32:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.163 09:32:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:40.163 09:32:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.163 09:32:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:40.163 09:32:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.163 09:32:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:40.163 09:32:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.163 09:32:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:40.163 09:32:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.163 09:32:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:40.163 [2024-07-15 09:32:27.211020] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.163 09:32:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.163 09:32:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=765128 00:23:40.163 09:32:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:23:40.163 09:32:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:40.163 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.093 09:32:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:23:42.093 09:32:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.093 09:32:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:42.093 09:32:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.093 09:32:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:23:42.093 
"tick_rate": 2400000000, 00:23:42.093 "poll_groups": [ 00:23:42.093 { 00:23:42.093 "name": "nvmf_tgt_poll_group_000", 00:23:42.093 "admin_qpairs": 1, 00:23:42.093 "io_qpairs": 1, 00:23:42.093 "current_admin_qpairs": 1, 00:23:42.093 "current_io_qpairs": 1, 00:23:42.093 "pending_bdev_io": 0, 00:23:42.093 "completed_nvme_io": 20400, 00:23:42.093 "transports": [ 00:23:42.093 { 00:23:42.093 "trtype": "TCP" 00:23:42.093 } 00:23:42.093 ] 00:23:42.093 }, 00:23:42.093 { 00:23:42.093 "name": "nvmf_tgt_poll_group_001", 00:23:42.093 "admin_qpairs": 0, 00:23:42.093 "io_qpairs": 1, 00:23:42.093 "current_admin_qpairs": 0, 00:23:42.093 "current_io_qpairs": 1, 00:23:42.093 "pending_bdev_io": 0, 00:23:42.093 "completed_nvme_io": 28921, 00:23:42.093 "transports": [ 00:23:42.093 { 00:23:42.093 "trtype": "TCP" 00:23:42.093 } 00:23:42.093 ] 00:23:42.093 }, 00:23:42.093 { 00:23:42.093 "name": "nvmf_tgt_poll_group_002", 00:23:42.093 "admin_qpairs": 0, 00:23:42.093 "io_qpairs": 1, 00:23:42.093 "current_admin_qpairs": 0, 00:23:42.093 "current_io_qpairs": 1, 00:23:42.093 "pending_bdev_io": 0, 00:23:42.093 "completed_nvme_io": 20582, 00:23:42.093 "transports": [ 00:23:42.093 { 00:23:42.093 "trtype": "TCP" 00:23:42.093 } 00:23:42.093 ] 00:23:42.093 }, 00:23:42.093 { 00:23:42.093 "name": "nvmf_tgt_poll_group_003", 00:23:42.093 "admin_qpairs": 0, 00:23:42.093 "io_qpairs": 1, 00:23:42.093 "current_admin_qpairs": 0, 00:23:42.093 "current_io_qpairs": 1, 00:23:42.093 "pending_bdev_io": 0, 00:23:42.093 "completed_nvme_io": 20928, 00:23:42.093 "transports": [ 00:23:42.093 { 00:23:42.093 "trtype": "TCP" 00:23:42.093 } 00:23:42.093 ] 00:23:42.093 } 00:23:42.093 ] 00:23:42.093 }' 00:23:42.093 09:32:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:42.093 09:32:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:23:42.354 09:32:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:23:42.354 09:32:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:23:42.354 09:32:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 765128 00:23:50.493 Initializing NVMe Controllers 00:23:50.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:50.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:50.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:50.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:50.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:50.493 Initialization complete. Launching workers. 
00:23:50.493 ======================================================== 00:23:50.493 Latency(us) 00:23:50.493 Device Information : IOPS MiB/s Average min max 00:23:50.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11423.02 44.62 5603.54 1345.22 9025.63 00:23:50.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14866.40 58.07 4304.64 1365.27 8861.82 00:23:50.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14181.60 55.40 4513.17 1232.53 11960.38 00:23:50.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13199.81 51.56 4848.79 1161.71 11796.52 00:23:50.493 ======================================================== 00:23:50.493 Total : 53670.84 209.65 4770.02 1161.71 11960.38 00:23:50.493 00:23:50.493 09:32:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:23:50.493 09:32:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:50.493 09:32:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:50.493 09:32:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:50.493 09:32:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:50.493 09:32:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:50.493 09:32:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:50.493 rmmod nvme_tcp 00:23:50.493 rmmod nvme_fabrics 00:23:50.493 rmmod nvme_keyring 00:23:50.493 09:32:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:50.493 09:32:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:50.493 09:32:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:50.493 09:32:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 764992 ']' 00:23:50.493 09:32:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 764992 00:23:50.493 09:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 764992 ']' 00:23:50.493 09:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 764992 00:23:50.493 09:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:23:50.493 09:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:50.493 09:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 764992 00:23:50.494 09:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:50.494 09:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:50.494 09:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 764992' 00:23:50.494 killing process with pid 764992 00:23:50.494 09:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 764992 00:23:50.494 09:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 764992 00:23:50.494 09:32:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:50.494 09:32:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:50.494 09:32:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:50.494 09:32:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:50.494 09:32:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:50.494 09:32:37 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.494 09:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.494 09:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.037 09:32:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:53.037 09:32:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:23:53.037 09:32:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:23:54.421 09:32:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:55.806 09:32:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:24:01.097 09:32:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:24:01.097 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:01.097 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.097 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:01.097 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:01.097 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:01.097 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.097 09:32:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:01.097 09:32:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.097 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:01.097 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:01.097 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:24:01.097 09:32:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.097 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:01.097 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:24:01.097 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:01.097 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:01.097 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:01.097 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:01.097 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:01.097 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:01.098 09:32:47 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:01.098 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:01.098 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
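Between the two runs the harness tears everything down and reloads the NIC driver (nvmftestfini followed by adq_reload_driver, traced a little above), so the second, ADQ-enabled pass starts from a clean slate. Roughly, with the module and interface names taken from the log:

    # Unload the initiator-side NVMe-oF modules and stop the target...
    modprobe -r nvme-tcp nvme-fabrics nvme-keyring
    kill "$nvmfpid" && wait "$nvmfpid"       # $nvmfpid from the earlier nvmf_tgt launch
    # ...flush the test address and bounce the ice driver before re-running with ADQ.
    ip -4 addr flush cvl_0_1
    rmmod ice
    modprobe ice
    sleep 5                                  # let the driver re-create cvl_0_0/cvl_0_1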
00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:01.098 Found net devices under 0000:31:00.0: cvl_0_0 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:01.098 Found net devices under 0000:31:00.1: cvl_0_1 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:01.098 09:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:01.098 09:32:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:01.098 09:32:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:01.098 
09:32:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:01.098 09:32:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:01.098 09:32:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:01.098 09:32:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:01.098 09:32:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:01.098 09:32:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:01.098 09:32:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:01.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:01.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.491 ms 00:24:01.098 00:24:01.098 --- 10.0.0.2 ping statistics --- 00:24:01.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.098 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms 00:24:01.098 09:32:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:01.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:01.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:24:01.098 00:24:01.098 --- 10.0.0.1 ping statistics --- 00:24:01.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.098 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:24:01.098 09:32:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.098 09:32:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:24:01.098 09:32:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:01.098 09:32:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.098 09:32:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:01.098 09:32:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:01.098 09:32:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.098 09:32:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:01.098 09:32:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:01.360 09:32:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:24:01.360 09:32:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:01.360 09:32:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:01.360 09:32:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:01.360 net.core.busy_poll = 1 00:24:01.360 09:32:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:01.360 net.core.busy_read = 1 00:24:01.360 09:32:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:01.360 09:32:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:01.360 09:32:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:24:01.360 09:32:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:01.360 09:32:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:01.621 09:32:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:01.621 09:32:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:01.621 09:32:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:01.621 09:32:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.621 09:32:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=769770 00:24:01.621 09:32:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 769770 00:24:01.622 09:32:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 769770 ']' 00:24:01.622 09:32:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.622 09:32:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:01.622 09:32:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.622 09:32:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:01.622 09:32:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.622 09:32:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:01.622 [2024-07-15 09:32:48.642084] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:24:01.622 [2024-07-15 09:32:48.642149] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.622 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.622 [2024-07-15 09:32:48.720422] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:01.622 [2024-07-15 09:32:48.795527] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.622 [2024-07-15 09:32:48.795565] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.622 [2024-07-15 09:32:48.795573] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.622 [2024-07-15 09:32:48.795579] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.622 [2024-07-15 09:32:48.795585] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
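adq_configure_driver, traced above, is where ADQ actually gets wired up on the target port: hardware TC offload and busy polling are enabled, an mqprio root qdisc splits the queues into two traffic classes, and a flower filter pins NVMe/TCP traffic to port 4420 into the second class in hardware. Condensed into a sketch; the commands are copied from the trace and only the namespace-prefix array is an addition.

    NS=(ip netns exec cvl_0_0_ns_spdk)
    "${NS[@]}" ethtool --offload cvl_0_0 hw-tc-offload on
    "${NS[@]}" ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1 net.core.busy_read=1
    # Two traffic classes: TC0 keeps queues 0-1, TC1 gets queues 2-3 ("2@2").
    "${NS[@]}" tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    "${NS[@]}" tc qdisc add dev cvl_0_0 ingress
    # Steer NVMe/TCP (dst 10.0.0.2:4420) into TC1 purely in hardware (skip_sw).
    "${NS[@]}" tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The application-side half appears right after this in the trace: the posix sock implementation is switched to --enable-placement-id 1 and the TCP transport is created with --sock-priority 1, so incoming queue pairs are placed on the poll group aligned with the hardware queue they arrived on.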
00:24:01.622 [2024-07-15 09:32:48.795729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.622 [2024-07-15 09:32:48.795943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.622 [2024-07-15 09:32:48.795944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:01.622 [2024-07-15 09:32:48.795847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:02.562 [2024-07-15 09:32:49.594060] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.562 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:02.563 Malloc1 00:24:02.563 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.563 09:32:49 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:02.563 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.563 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:02.563 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.563 09:32:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:02.563 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.563 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:02.563 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.563 09:32:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:02.563 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.563 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:02.563 [2024-07-15 09:32:49.653409] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.563 09:32:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.563 09:32:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=769843 00:24:02.563 09:32:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:24:02.563 09:32:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:02.563 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.564 09:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:24:04.564 09:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.564 09:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:04.564 09:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.564 09:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:24:04.564 "tick_rate": 2400000000, 00:24:04.564 "poll_groups": [ 00:24:04.564 { 00:24:04.565 "name": "nvmf_tgt_poll_group_000", 00:24:04.565 "admin_qpairs": 1, 00:24:04.565 "io_qpairs": 0, 00:24:04.565 "current_admin_qpairs": 1, 00:24:04.565 "current_io_qpairs": 0, 00:24:04.565 "pending_bdev_io": 0, 00:24:04.565 "completed_nvme_io": 0, 00:24:04.565 "transports": [ 00:24:04.565 { 00:24:04.565 "trtype": "TCP" 00:24:04.565 } 00:24:04.565 ] 00:24:04.565 }, 00:24:04.565 { 00:24:04.565 "name": "nvmf_tgt_poll_group_001", 00:24:04.565 "admin_qpairs": 0, 00:24:04.565 "io_qpairs": 4, 00:24:04.565 "current_admin_qpairs": 0, 00:24:04.565 "current_io_qpairs": 4, 00:24:04.565 "pending_bdev_io": 0, 00:24:04.565 "completed_nvme_io": 51560, 00:24:04.565 "transports": [ 00:24:04.565 { 00:24:04.565 "trtype": "TCP" 00:24:04.565 } 00:24:04.565 ] 00:24:04.565 }, 00:24:04.565 { 00:24:04.565 "name": "nvmf_tgt_poll_group_002", 00:24:04.565 "admin_qpairs": 0, 00:24:04.565 "io_qpairs": 0, 00:24:04.565 "current_admin_qpairs": 0, 00:24:04.565 "current_io_qpairs": 0, 00:24:04.565 "pending_bdev_io": 0, 00:24:04.565 "completed_nvme_io": 0, 00:24:04.565 
"transports": [ 00:24:04.565 { 00:24:04.565 "trtype": "TCP" 00:24:04.565 } 00:24:04.565 ] 00:24:04.565 }, 00:24:04.565 { 00:24:04.565 "name": "nvmf_tgt_poll_group_003", 00:24:04.565 "admin_qpairs": 0, 00:24:04.565 "io_qpairs": 0, 00:24:04.565 "current_admin_qpairs": 0, 00:24:04.565 "current_io_qpairs": 0, 00:24:04.565 "pending_bdev_io": 0, 00:24:04.565 "completed_nvme_io": 0, 00:24:04.565 "transports": [ 00:24:04.565 { 00:24:04.565 "trtype": "TCP" 00:24:04.565 } 00:24:04.565 ] 00:24:04.565 } 00:24:04.565 ] 00:24:04.565 }' 00:24:04.565 09:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:04.565 09:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:24:04.565 09:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=3 00:24:04.565 09:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 3 -lt 2 ]] 00:24:04.565 09:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 769843 00:24:12.706 Initializing NVMe Controllers 00:24:12.706 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:12.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:12.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:12.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:12.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:12.706 Initialization complete. Launching workers. 00:24:12.706 ======================================================== 00:24:12.706 Latency(us) 00:24:12.706 Device Information : IOPS MiB/s Average min max 00:24:12.706 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6780.10 26.48 9440.62 1244.75 54043.80 00:24:12.706 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7618.90 29.76 8425.34 1240.90 54604.24 00:24:12.706 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6486.90 25.34 9895.60 1267.29 53779.61 00:24:12.706 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6496.10 25.38 9881.76 1243.31 54308.37 00:24:12.706 ======================================================== 00:24:12.706 Total : 27381.99 106.96 9370.57 1240.90 54604.24 00:24:12.706 00:24:12.706 09:32:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:24:12.706 09:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:12.706 09:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:24:12.706 09:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:12.706 09:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:24:12.706 09:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:12.706 09:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:12.706 rmmod nvme_tcp 00:24:12.706 rmmod nvme_fabrics 00:24:12.966 rmmod nvme_keyring 00:24:12.966 09:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:12.966 09:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:24:12.966 09:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:24:12.966 09:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 769770 ']' 00:24:12.966 09:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 769770 
00:24:12.966 09:32:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 769770 ']' 00:24:12.966 09:32:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 769770 00:24:12.966 09:32:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:24:12.966 09:32:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:12.966 09:32:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 769770 00:24:12.966 09:32:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:12.966 09:32:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:12.966 09:32:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 769770' 00:24:12.966 killing process with pid 769770 00:24:12.966 09:32:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 769770 00:24:12.966 09:32:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 769770 00:24:12.966 09:33:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:12.966 09:33:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:12.966 09:33:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:12.966 09:33:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:12.966 09:33:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:12.966 09:33:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.966 09:33:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.966 09:33:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.262 09:33:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:16.262 09:33:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:24:16.262 00:24:16.262 real 0m53.801s 00:24:16.262 user 2m49.838s 00:24:16.262 sys 0m11.263s 00:24:16.262 09:33:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:16.262 09:33:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:16.262 ************************************ 00:24:16.262 END TEST nvmf_perf_adq 00:24:16.262 ************************************ 00:24:16.262 09:33:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:16.262 09:33:03 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:16.262 09:33:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:16.262 09:33:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:16.262 09:33:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:16.262 ************************************ 00:24:16.262 START TEST nvmf_shutdown 00:24:16.262 ************************************ 00:24:16.262 09:33:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:16.262 * Looking for test storage... 
00:24:16.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:16.262 09:33:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:16.262 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:24:16.262 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.262 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.262 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.262 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.262 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.262 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.262 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.262 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.262 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.262 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.262 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:16.262 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:16.262 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.262 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.262 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.262 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:16.262 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:16.262 09:33:03 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.262 09:33:03 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.262 09:33:03 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.263 09:33:03 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.263 09:33:03 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.263 09:33:03 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.263 09:33:03 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:24:16.263 09:33:03 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.263 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:24:16.263 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:16.263 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:16.263 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.263 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.263 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.263 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:16.263 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:16.263 09:33:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:16.263 09:33:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:16.263 09:33:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:16.263 09:33:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:16.263 09:33:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:16.263 09:33:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:16.263 09:33:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:16.523 ************************************ 00:24:16.523 START TEST nvmf_shutdown_tc1 00:24:16.523 ************************************ 00:24:16.523 09:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:24:16.523 09:33:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:24:16.523 09:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:16.523 09:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:16.523 09:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.523 09:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:16.523 09:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:16.523 09:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:16.523 09:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.523 09:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:16.523 09:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.523 09:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:16.523 09:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:16.523 09:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:16.523 09:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:24.672 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:24.672 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:24.672 09:33:11 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:24.672 Found net devices under 0000:31:00.0: cvl_0_0 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:24.672 Found net devices under 0000:31:00.1: cvl_0_1 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:24.672 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:24.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:24.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:24:24.673 00:24:24.673 --- 10.0.0.2 ping statistics --- 00:24:24.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.673 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:24.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:24.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:24:24.673 00:24:24.673 --- 10.0.0.1 ping statistics --- 00:24:24.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.673 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=777521 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 777521 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 777521 ']' 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:24.673 09:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:24.673 [2024-07-15 09:33:11.594358] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:24:24.673 [2024-07-15 09:33:11.594422] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.673 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.673 [2024-07-15 09:33:11.688689] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:24.673 [2024-07-15 09:33:11.782523] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.673 [2024-07-15 09:33:11.782583] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.673 [2024-07-15 09:33:11.782591] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.673 [2024-07-15 09:33:11.782598] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.673 [2024-07-15 09:33:11.782604] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:24.673 [2024-07-15 09:33:11.782740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:24.673 [2024-07-15 09:33:11.782908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:24.673 [2024-07-15 09:33:11.783050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.673 [2024-07-15 09:33:11.783051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:25.244 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:25.244 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:24:25.244 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:25.244 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:25.244 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:25.244 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.244 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:25.244 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.244 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:25.244 [2024-07-15 09:33:12.416260] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.244 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.244 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:25.244 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:25.244 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:25.244 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:25.244 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:25.244 09:33:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:25.244 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:25.244 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:25.244 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:25.505 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:25.505 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:25.505 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:25.505 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:25.505 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:25.505 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:25.505 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:25.505 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:25.505 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:25.505 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:25.505 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:25.505 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:25.505 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:25.505 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:25.505 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:25.505 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:25.505 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:25.505 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.505 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:25.505 Malloc1 00:24:25.505 [2024-07-15 09:33:12.517171] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:25.505 Malloc2 00:24:25.505 Malloc3 00:24:25.505 Malloc4 00:24:25.505 Malloc5 00:24:25.505 Malloc6 00:24:25.767 Malloc7 00:24:25.767 Malloc8 00:24:25.767 Malloc9 00:24:25.767 Malloc10 00:24:25.767 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.767 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:25.767 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:25.767 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:25.767 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=777754 00:24:25.767 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 777754 
/var/tmp/bdevperf.sock 00:24:25.767 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 777754 ']' 00:24:25.767 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:25.767 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:25.767 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:25.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:25.767 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:25.767 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:25.767 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:25.768 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:25.768 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:24:25.768 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:24:25.768 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:25.768 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:25.768 { 00:24:25.768 "params": { 00:24:25.768 "name": "Nvme$subsystem", 00:24:25.768 "trtype": "$TEST_TRANSPORT", 00:24:25.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:25.768 "adrfam": "ipv4", 00:24:25.768 "trsvcid": "$NVMF_PORT", 00:24:25.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:25.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:25.768 "hdgst": ${hdgst:-false}, 00:24:25.768 "ddgst": ${ddgst:-false} 00:24:25.768 }, 00:24:25.768 "method": "bdev_nvme_attach_controller" 00:24:25.768 } 00:24:25.768 EOF 00:24:25.768 )") 00:24:25.768 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:25.768 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:25.768 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:25.768 { 00:24:25.768 "params": { 00:24:25.768 "name": "Nvme$subsystem", 00:24:25.768 "trtype": "$TEST_TRANSPORT", 00:24:25.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:25.768 "adrfam": "ipv4", 00:24:25.768 "trsvcid": "$NVMF_PORT", 00:24:25.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:25.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:25.768 "hdgst": ${hdgst:-false}, 00:24:25.768 "ddgst": ${ddgst:-false} 00:24:25.768 }, 00:24:25.768 "method": "bdev_nvme_attach_controller" 00:24:25.768 } 00:24:25.768 EOF 00:24:25.768 )") 00:24:25.768 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:25.768 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:25.768 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:25.768 { 00:24:25.768 "params": { 00:24:25.768 
"name": "Nvme$subsystem", 00:24:25.768 "trtype": "$TEST_TRANSPORT", 00:24:25.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:25.768 "adrfam": "ipv4", 00:24:25.768 "trsvcid": "$NVMF_PORT", 00:24:25.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:25.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:25.768 "hdgst": ${hdgst:-false}, 00:24:25.768 "ddgst": ${ddgst:-false} 00:24:25.768 }, 00:24:25.768 "method": "bdev_nvme_attach_controller" 00:24:25.768 } 00:24:25.768 EOF 00:24:25.768 )") 00:24:25.768 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:25.768 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:25.768 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:25.768 { 00:24:25.768 "params": { 00:24:25.768 "name": "Nvme$subsystem", 00:24:25.768 "trtype": "$TEST_TRANSPORT", 00:24:25.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:25.768 "adrfam": "ipv4", 00:24:25.768 "trsvcid": "$NVMF_PORT", 00:24:25.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:25.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:25.768 "hdgst": ${hdgst:-false}, 00:24:25.768 "ddgst": ${ddgst:-false} 00:24:25.768 }, 00:24:25.768 "method": "bdev_nvme_attach_controller" 00:24:25.768 } 00:24:25.768 EOF 00:24:25.768 )") 00:24:25.768 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:25.768 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:25.768 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:25.768 { 00:24:25.768 "params": { 00:24:25.768 "name": "Nvme$subsystem", 00:24:25.768 "trtype": "$TEST_TRANSPORT", 00:24:25.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:25.768 "adrfam": "ipv4", 00:24:25.768 "trsvcid": "$NVMF_PORT", 00:24:25.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:25.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:25.768 "hdgst": ${hdgst:-false}, 00:24:25.768 "ddgst": ${ddgst:-false} 00:24:25.768 }, 00:24:25.768 "method": "bdev_nvme_attach_controller" 00:24:25.768 } 00:24:25.768 EOF 00:24:25.768 )") 00:24:25.768 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:25.768 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:25.768 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:25.768 { 00:24:25.768 "params": { 00:24:25.768 "name": "Nvme$subsystem", 00:24:25.768 "trtype": "$TEST_TRANSPORT", 00:24:25.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:25.768 "adrfam": "ipv4", 00:24:25.768 "trsvcid": "$NVMF_PORT", 00:24:25.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:25.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:25.768 "hdgst": ${hdgst:-false}, 00:24:25.768 "ddgst": ${ddgst:-false} 00:24:25.768 }, 00:24:25.768 "method": "bdev_nvme_attach_controller" 00:24:25.768 } 00:24:25.768 EOF 00:24:25.768 )") 00:24:25.768 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:26.031 [2024-07-15 09:33:12.967427] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:24:26.031 [2024-07-15 09:33:12.967478] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:26.031 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:26.031 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:26.031 { 00:24:26.031 "params": { 00:24:26.031 "name": "Nvme$subsystem", 00:24:26.031 "trtype": "$TEST_TRANSPORT", 00:24:26.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.031 "adrfam": "ipv4", 00:24:26.031 "trsvcid": "$NVMF_PORT", 00:24:26.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.031 "hdgst": ${hdgst:-false}, 00:24:26.031 "ddgst": ${ddgst:-false} 00:24:26.031 }, 00:24:26.031 "method": "bdev_nvme_attach_controller" 00:24:26.031 } 00:24:26.031 EOF 00:24:26.031 )") 00:24:26.031 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:26.031 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:26.031 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:26.031 { 00:24:26.031 "params": { 00:24:26.031 "name": "Nvme$subsystem", 00:24:26.031 "trtype": "$TEST_TRANSPORT", 00:24:26.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.031 "adrfam": "ipv4", 00:24:26.031 "trsvcid": "$NVMF_PORT", 00:24:26.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.031 "hdgst": ${hdgst:-false}, 00:24:26.031 "ddgst": ${ddgst:-false} 00:24:26.031 }, 00:24:26.031 "method": "bdev_nvme_attach_controller" 00:24:26.031 } 00:24:26.031 EOF 00:24:26.031 )") 00:24:26.031 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:26.031 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:26.031 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:26.031 { 00:24:26.031 "params": { 00:24:26.031 "name": "Nvme$subsystem", 00:24:26.031 "trtype": "$TEST_TRANSPORT", 00:24:26.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.031 "adrfam": "ipv4", 00:24:26.031 "trsvcid": "$NVMF_PORT", 00:24:26.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.031 "hdgst": ${hdgst:-false}, 00:24:26.031 "ddgst": ${ddgst:-false} 00:24:26.031 }, 00:24:26.031 "method": "bdev_nvme_attach_controller" 00:24:26.031 } 00:24:26.031 EOF 00:24:26.031 )") 00:24:26.031 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:26.031 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:26.031 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:26.031 { 00:24:26.031 "params": { 00:24:26.031 "name": "Nvme$subsystem", 00:24:26.031 "trtype": "$TEST_TRANSPORT", 00:24:26.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.031 "adrfam": "ipv4", 00:24:26.031 "trsvcid": "$NVMF_PORT", 00:24:26.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.031 "hdgst": ${hdgst:-false}, 
00:24:26.031 "ddgst": ${ddgst:-false} 00:24:26.031 }, 00:24:26.031 "method": "bdev_nvme_attach_controller" 00:24:26.031 } 00:24:26.031 EOF 00:24:26.031 )") 00:24:26.031 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:26.031 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.031 09:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:24:26.031 09:33:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:24:26.031 09:33:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:26.031 "params": { 00:24:26.031 "name": "Nvme1", 00:24:26.031 "trtype": "tcp", 00:24:26.031 "traddr": "10.0.0.2", 00:24:26.031 "adrfam": "ipv4", 00:24:26.031 "trsvcid": "4420", 00:24:26.031 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.031 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:26.031 "hdgst": false, 00:24:26.031 "ddgst": false 00:24:26.031 }, 00:24:26.031 "method": "bdev_nvme_attach_controller" 00:24:26.031 },{ 00:24:26.031 "params": { 00:24:26.031 "name": "Nvme2", 00:24:26.031 "trtype": "tcp", 00:24:26.031 "traddr": "10.0.0.2", 00:24:26.031 "adrfam": "ipv4", 00:24:26.031 "trsvcid": "4420", 00:24:26.031 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:26.031 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:26.031 "hdgst": false, 00:24:26.031 "ddgst": false 00:24:26.031 }, 00:24:26.031 "method": "bdev_nvme_attach_controller" 00:24:26.031 },{ 00:24:26.031 "params": { 00:24:26.031 "name": "Nvme3", 00:24:26.031 "trtype": "tcp", 00:24:26.031 "traddr": "10.0.0.2", 00:24:26.031 "adrfam": "ipv4", 00:24:26.031 "trsvcid": "4420", 00:24:26.031 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:26.031 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:26.031 "hdgst": false, 00:24:26.031 "ddgst": false 00:24:26.031 }, 00:24:26.031 "method": "bdev_nvme_attach_controller" 00:24:26.031 },{ 00:24:26.031 "params": { 00:24:26.031 "name": "Nvme4", 00:24:26.031 "trtype": "tcp", 00:24:26.031 "traddr": "10.0.0.2", 00:24:26.031 "adrfam": "ipv4", 00:24:26.031 "trsvcid": "4420", 00:24:26.031 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:26.031 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:26.031 "hdgst": false, 00:24:26.031 "ddgst": false 00:24:26.031 }, 00:24:26.031 "method": "bdev_nvme_attach_controller" 00:24:26.031 },{ 00:24:26.031 "params": { 00:24:26.031 "name": "Nvme5", 00:24:26.031 "trtype": "tcp", 00:24:26.031 "traddr": "10.0.0.2", 00:24:26.031 "adrfam": "ipv4", 00:24:26.031 "trsvcid": "4420", 00:24:26.031 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:26.031 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:26.031 "hdgst": false, 00:24:26.031 "ddgst": false 00:24:26.031 }, 00:24:26.031 "method": "bdev_nvme_attach_controller" 00:24:26.031 },{ 00:24:26.031 "params": { 00:24:26.031 "name": "Nvme6", 00:24:26.031 "trtype": "tcp", 00:24:26.031 "traddr": "10.0.0.2", 00:24:26.031 "adrfam": "ipv4", 00:24:26.031 "trsvcid": "4420", 00:24:26.031 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:26.031 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:26.031 "hdgst": false, 00:24:26.031 "ddgst": false 00:24:26.031 }, 00:24:26.031 "method": "bdev_nvme_attach_controller" 00:24:26.031 },{ 00:24:26.031 "params": { 00:24:26.031 "name": "Nvme7", 00:24:26.031 "trtype": "tcp", 00:24:26.031 "traddr": "10.0.0.2", 00:24:26.031 "adrfam": "ipv4", 00:24:26.031 "trsvcid": "4420", 00:24:26.031 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:26.031 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:26.031 "hdgst": false, 00:24:26.031 "ddgst": false 
00:24:26.031 }, 00:24:26.031 "method": "bdev_nvme_attach_controller" 00:24:26.031 },{ 00:24:26.031 "params": { 00:24:26.031 "name": "Nvme8", 00:24:26.031 "trtype": "tcp", 00:24:26.031 "traddr": "10.0.0.2", 00:24:26.031 "adrfam": "ipv4", 00:24:26.031 "trsvcid": "4420", 00:24:26.031 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:26.031 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:26.031 "hdgst": false, 00:24:26.031 "ddgst": false 00:24:26.031 }, 00:24:26.031 "method": "bdev_nvme_attach_controller" 00:24:26.031 },{ 00:24:26.031 "params": { 00:24:26.031 "name": "Nvme9", 00:24:26.031 "trtype": "tcp", 00:24:26.031 "traddr": "10.0.0.2", 00:24:26.031 "adrfam": "ipv4", 00:24:26.031 "trsvcid": "4420", 00:24:26.031 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:26.031 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:26.031 "hdgst": false, 00:24:26.031 "ddgst": false 00:24:26.031 }, 00:24:26.031 "method": "bdev_nvme_attach_controller" 00:24:26.031 },{ 00:24:26.031 "params": { 00:24:26.031 "name": "Nvme10", 00:24:26.031 "trtype": "tcp", 00:24:26.031 "traddr": "10.0.0.2", 00:24:26.031 "adrfam": "ipv4", 00:24:26.031 "trsvcid": "4420", 00:24:26.031 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:26.031 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:26.031 "hdgst": false, 00:24:26.031 "ddgst": false 00:24:26.031 }, 00:24:26.031 "method": "bdev_nvme_attach_controller" 00:24:26.031 }' 00:24:26.031 [2024-07-15 09:33:13.034508] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.031 [2024-07-15 09:33:13.099003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.417 09:33:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:27.417 09:33:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:24:27.417 09:33:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:27.417 09:33:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.417 09:33:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:27.417 09:33:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.417 09:33:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 777754 00:24:27.417 09:33:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:27.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 777754 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:27.417 09:33:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 777521 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 
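The xtrace above is nvmf/common.sh's gen_nvmf_target_json at work: for each subsystem id it appends one heredoc fragment to the config array, joins the fragments with IFS=',' and pretty-prints the result through jq, and the bdev_svc/bdevperf apps then read that document via --json. A minimal sketch of the pattern, assuming an outer "subsystems"/"bdev" wrapper (only the joined per-controller entries are visible in this trace) and using the addresses from the resolved output printed above:

#!/usr/bin/env bash
# Build one bdev_nvme_attach_controller entry per subsystem id passed on the
# command line, then join them into a single JSON config document.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# Comma-join the fragments and validate/pretty-print the final document with jq,
# mirroring the IFS=, / printf / jq steps in the trace above.
joined=$(IFS=','; printf '%s' "${config[*]}")
jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ $joined ] } ] }
EOF

Run with the arguments 1 2 3 4 5 6 7 8 9 10, this expands to the ten Nvme1..Nvme10 controller entries shown above, one per cnode/host NQN pair.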
00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:28.361 { 00:24:28.361 "params": { 00:24:28.361 "name": "Nvme$subsystem", 00:24:28.361 "trtype": "$TEST_TRANSPORT", 00:24:28.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.361 "adrfam": "ipv4", 00:24:28.361 "trsvcid": "$NVMF_PORT", 00:24:28.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.361 "hdgst": ${hdgst:-false}, 00:24:28.361 "ddgst": ${ddgst:-false} 00:24:28.361 }, 00:24:28.361 "method": "bdev_nvme_attach_controller" 00:24:28.361 } 00:24:28.361 EOF 00:24:28.361 )") 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:28.361 { 00:24:28.361 "params": { 00:24:28.361 "name": "Nvme$subsystem", 00:24:28.361 "trtype": "$TEST_TRANSPORT", 00:24:28.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.361 "adrfam": "ipv4", 00:24:28.361 "trsvcid": "$NVMF_PORT", 00:24:28.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.361 "hdgst": ${hdgst:-false}, 00:24:28.361 "ddgst": ${ddgst:-false} 00:24:28.361 }, 00:24:28.361 "method": "bdev_nvme_attach_controller" 00:24:28.361 } 00:24:28.361 EOF 00:24:28.361 )") 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:28.361 { 00:24:28.361 "params": { 00:24:28.361 "name": "Nvme$subsystem", 00:24:28.361 "trtype": "$TEST_TRANSPORT", 00:24:28.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.361 "adrfam": "ipv4", 00:24:28.361 "trsvcid": "$NVMF_PORT", 00:24:28.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.361 "hdgst": ${hdgst:-false}, 00:24:28.361 "ddgst": ${ddgst:-false} 00:24:28.361 }, 00:24:28.361 "method": "bdev_nvme_attach_controller" 00:24:28.361 } 00:24:28.361 EOF 00:24:28.361 )") 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:28.361 { 00:24:28.361 "params": { 00:24:28.361 "name": "Nvme$subsystem", 00:24:28.361 "trtype": "$TEST_TRANSPORT", 00:24:28.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.361 "adrfam": "ipv4", 00:24:28.361 "trsvcid": "$NVMF_PORT", 00:24:28.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.361 "hdgst": ${hdgst:-false}, 00:24:28.361 "ddgst": ${ddgst:-false} 00:24:28.361 }, 00:24:28.361 "method": "bdev_nvme_attach_controller" 00:24:28.361 } 00:24:28.361 EOF 00:24:28.361 )") 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:28.361 09:33:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:28.361 { 00:24:28.361 "params": { 00:24:28.361 "name": "Nvme$subsystem", 00:24:28.361 "trtype": "$TEST_TRANSPORT", 00:24:28.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.361 "adrfam": "ipv4", 00:24:28.361 "trsvcid": "$NVMF_PORT", 00:24:28.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.361 "hdgst": ${hdgst:-false}, 00:24:28.361 "ddgst": ${ddgst:-false} 00:24:28.361 }, 00:24:28.361 "method": "bdev_nvme_attach_controller" 00:24:28.361 } 00:24:28.361 EOF 00:24:28.361 )") 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:28.361 { 00:24:28.361 "params": { 00:24:28.361 "name": "Nvme$subsystem", 00:24:28.361 "trtype": "$TEST_TRANSPORT", 00:24:28.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.361 "adrfam": "ipv4", 00:24:28.361 "trsvcid": "$NVMF_PORT", 00:24:28.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.361 "hdgst": ${hdgst:-false}, 00:24:28.361 "ddgst": ${ddgst:-false} 00:24:28.361 }, 00:24:28.361 "method": "bdev_nvme_attach_controller" 00:24:28.361 } 00:24:28.361 EOF 00:24:28.361 )") 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:28.361 { 00:24:28.361 "params": { 00:24:28.361 "name": "Nvme$subsystem", 00:24:28.361 "trtype": "$TEST_TRANSPORT", 00:24:28.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.361 "adrfam": "ipv4", 00:24:28.361 "trsvcid": "$NVMF_PORT", 00:24:28.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.361 "hdgst": ${hdgst:-false}, 00:24:28.361 "ddgst": ${ddgst:-false} 00:24:28.361 }, 00:24:28.361 "method": "bdev_nvme_attach_controller" 00:24:28.361 } 00:24:28.361 EOF 00:24:28.361 )") 00:24:28.361 [2024-07-15 09:33:15.508900] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:24:28.361 [2024-07-15 09:33:15.508953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778289 ] 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:28.361 { 00:24:28.361 "params": { 00:24:28.361 "name": "Nvme$subsystem", 00:24:28.361 "trtype": "$TEST_TRANSPORT", 00:24:28.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.361 "adrfam": "ipv4", 00:24:28.361 "trsvcid": "$NVMF_PORT", 00:24:28.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.361 "hdgst": ${hdgst:-false}, 00:24:28.361 "ddgst": ${ddgst:-false} 00:24:28.361 }, 00:24:28.361 "method": "bdev_nvme_attach_controller" 00:24:28.361 } 00:24:28.361 EOF 00:24:28.361 )") 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:28.361 { 00:24:28.361 "params": { 00:24:28.361 "name": "Nvme$subsystem", 00:24:28.361 "trtype": "$TEST_TRANSPORT", 00:24:28.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.361 "adrfam": "ipv4", 00:24:28.361 "trsvcid": "$NVMF_PORT", 00:24:28.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.361 "hdgst": ${hdgst:-false}, 00:24:28.361 "ddgst": ${ddgst:-false} 00:24:28.361 }, 00:24:28.361 "method": "bdev_nvme_attach_controller" 00:24:28.361 } 00:24:28.361 EOF 00:24:28.361 )") 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:28.361 { 00:24:28.361 "params": { 00:24:28.361 "name": "Nvme$subsystem", 00:24:28.361 "trtype": "$TEST_TRANSPORT", 00:24:28.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.361 "adrfam": "ipv4", 00:24:28.361 "trsvcid": "$NVMF_PORT", 00:24:28.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.361 "hdgst": ${hdgst:-false}, 00:24:28.361 "ddgst": ${ddgst:-false} 00:24:28.361 }, 00:24:28.361 "method": "bdev_nvme_attach_controller" 00:24:28.361 } 00:24:28.361 EOF 00:24:28.361 )") 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:24:28.361 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:24:28.361 09:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:28.361 "params": { 00:24:28.361 "name": "Nvme1", 00:24:28.361 "trtype": "tcp", 00:24:28.361 "traddr": "10.0.0.2", 00:24:28.361 "adrfam": "ipv4", 00:24:28.361 "trsvcid": "4420", 00:24:28.361 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.361 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:28.361 "hdgst": false, 00:24:28.361 "ddgst": false 00:24:28.361 }, 00:24:28.361 "method": "bdev_nvme_attach_controller" 00:24:28.361 },{ 00:24:28.362 "params": { 00:24:28.362 "name": "Nvme2", 00:24:28.362 "trtype": "tcp", 00:24:28.362 "traddr": "10.0.0.2", 00:24:28.362 "adrfam": "ipv4", 00:24:28.362 "trsvcid": "4420", 00:24:28.362 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:28.362 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:28.362 "hdgst": false, 00:24:28.362 "ddgst": false 00:24:28.362 }, 00:24:28.362 "method": "bdev_nvme_attach_controller" 00:24:28.362 },{ 00:24:28.362 "params": { 00:24:28.362 "name": "Nvme3", 00:24:28.362 "trtype": "tcp", 00:24:28.362 "traddr": "10.0.0.2", 00:24:28.362 "adrfam": "ipv4", 00:24:28.362 "trsvcid": "4420", 00:24:28.362 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:28.362 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:28.362 "hdgst": false, 00:24:28.362 "ddgst": false 00:24:28.362 }, 00:24:28.362 "method": "bdev_nvme_attach_controller" 00:24:28.362 },{ 00:24:28.362 "params": { 00:24:28.362 "name": "Nvme4", 00:24:28.362 "trtype": "tcp", 00:24:28.362 "traddr": "10.0.0.2", 00:24:28.362 "adrfam": "ipv4", 00:24:28.362 "trsvcid": "4420", 00:24:28.362 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:28.362 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:28.362 "hdgst": false, 00:24:28.362 "ddgst": false 00:24:28.362 }, 00:24:28.362 "method": "bdev_nvme_attach_controller" 00:24:28.362 },{ 00:24:28.362 "params": { 00:24:28.362 "name": "Nvme5", 00:24:28.362 "trtype": "tcp", 00:24:28.362 "traddr": "10.0.0.2", 00:24:28.362 "adrfam": "ipv4", 00:24:28.362 "trsvcid": "4420", 00:24:28.362 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:28.362 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:28.362 "hdgst": false, 00:24:28.362 "ddgst": false 00:24:28.362 }, 00:24:28.362 "method": "bdev_nvme_attach_controller" 00:24:28.362 },{ 00:24:28.362 "params": { 00:24:28.362 "name": "Nvme6", 00:24:28.362 "trtype": "tcp", 00:24:28.362 "traddr": "10.0.0.2", 00:24:28.362 "adrfam": "ipv4", 00:24:28.362 "trsvcid": "4420", 00:24:28.362 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:28.362 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:28.362 "hdgst": false, 00:24:28.362 "ddgst": false 00:24:28.362 }, 00:24:28.362 "method": "bdev_nvme_attach_controller" 00:24:28.362 },{ 00:24:28.362 "params": { 00:24:28.362 "name": "Nvme7", 00:24:28.362 "trtype": "tcp", 00:24:28.362 "traddr": "10.0.0.2", 00:24:28.362 "adrfam": "ipv4", 00:24:28.362 "trsvcid": "4420", 00:24:28.362 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:28.362 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:28.362 "hdgst": false, 00:24:28.362 "ddgst": false 00:24:28.362 }, 00:24:28.362 "method": "bdev_nvme_attach_controller" 00:24:28.362 },{ 00:24:28.362 "params": { 00:24:28.362 "name": "Nvme8", 00:24:28.362 "trtype": "tcp", 00:24:28.362 "traddr": "10.0.0.2", 00:24:28.362 "adrfam": "ipv4", 00:24:28.362 "trsvcid": "4420", 00:24:28.362 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:28.362 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:24:28.362 "hdgst": false, 00:24:28.362 "ddgst": false 00:24:28.362 }, 00:24:28.362 "method": "bdev_nvme_attach_controller" 00:24:28.362 },{ 00:24:28.362 "params": { 00:24:28.362 "name": "Nvme9", 00:24:28.362 "trtype": "tcp", 00:24:28.362 "traddr": "10.0.0.2", 00:24:28.362 "adrfam": "ipv4", 00:24:28.362 "trsvcid": "4420", 00:24:28.362 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:28.362 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:28.362 "hdgst": false, 00:24:28.362 "ddgst": false 00:24:28.362 }, 00:24:28.362 "method": "bdev_nvme_attach_controller" 00:24:28.362 },{ 00:24:28.362 "params": { 00:24:28.362 "name": "Nvme10", 00:24:28.362 "trtype": "tcp", 00:24:28.362 "traddr": "10.0.0.2", 00:24:28.362 "adrfam": "ipv4", 00:24:28.362 "trsvcid": "4420", 00:24:28.362 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:28.362 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:28.362 "hdgst": false, 00:24:28.362 "ddgst": false 00:24:28.362 }, 00:24:28.362 "method": "bdev_nvme_attach_controller" 00:24:28.362 }' 00:24:28.623 [2024-07-15 09:33:15.576113] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.623 [2024-07-15 09:33:15.640371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.007 Running I/O for 1 seconds... 00:24:31.397 00:24:31.397 Latency(us) 00:24:31.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.397 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:31.397 Verification LBA range: start 0x0 length 0x400 00:24:31.397 Nvme1n1 : 1.17 218.88 13.68 0.00 0.00 289515.95 19333.12 248162.99 00:24:31.397 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:31.397 Verification LBA range: start 0x0 length 0x400 00:24:31.397 Nvme2n1 : 1.16 220.79 13.80 0.00 0.00 282232.11 21954.56 242920.11 00:24:31.397 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:31.397 Verification LBA range: start 0x0 length 0x400 00:24:31.397 Nvme3n1 : 1.10 243.91 15.24 0.00 0.00 240688.37 12069.55 244667.73 00:24:31.397 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:31.397 Verification LBA range: start 0x0 length 0x400 00:24:31.397 Nvme4n1 : 1.20 266.42 16.65 0.00 0.00 224610.13 27634.35 242920.11 00:24:31.397 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:31.397 Verification LBA range: start 0x0 length 0x400 00:24:31.397 Nvme5n1 : 1.11 230.77 14.42 0.00 0.00 255100.37 16056.32 249910.61 00:24:31.397 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:31.397 Verification LBA range: start 0x0 length 0x400 00:24:31.397 Nvme6n1 : 1.20 265.82 16.61 0.00 0.00 218886.14 13926.40 242920.11 00:24:31.397 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:31.397 Verification LBA range: start 0x0 length 0x400 00:24:31.397 Nvme7n1 : 1.21 263.48 16.47 0.00 0.00 216580.44 19114.67 244667.73 00:24:31.397 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:31.397 Verification LBA range: start 0x0 length 0x400 00:24:31.397 Nvme8n1 : 1.25 255.54 15.97 0.00 0.00 213433.09 13707.95 242920.11 00:24:31.397 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:31.397 Verification LBA range: start 0x0 length 0x400 00:24:31.397 Nvme9n1 : 1.20 213.39 13.34 0.00 0.00 258157.01 18896.21 274377.39 00:24:31.397 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:24:31.397 Verification LBA range: start 0x0 length 0x400 00:24:31.397 Nvme10n1 : 1.22 262.31 16.39 0.00 0.00 206984.19 11632.64 251658.24 00:24:31.397 =================================================================================================================== 00:24:31.397 Total : 2441.31 152.58 0.00 0.00 237906.89 11632.64 274377.39 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:31.397 rmmod nvme_tcp 00:24:31.397 rmmod nvme_fabrics 00:24:31.397 rmmod nvme_keyring 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 777521 ']' 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 777521 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 777521 ']' 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 777521 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 777521 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 777521' 00:24:31.397 killing process with pid 777521 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 777521 00:24:31.397 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 777521 
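A note on reading the bdevperf summary above: the run used 64 KiB IOs (-o 65536) at queue depth 64, so the MiB/s column is just IOPS scaled by the IO size, and the Average/min/max latency columns are in microseconds (the "Latency(us)" header). A quick sanity check of one row, using only numbers from the table:

# Nvme1n1 reported 218.88 IOPS at 64 KiB per IO
awk 'BEGIN { printf "%.2f MiB/s\n", 218.88 * 65536 / (1024 * 1024) }'
# prints 13.68 MiB/s, matching that row's MiB/s column

The same scaling applied to the Total row (2441.31 IOPS) reproduces the 152.58 MiB/s aggregate.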
00:24:31.658 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:31.658 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:31.658 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:31.658 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:31.658 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:31.658 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.658 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:31.658 09:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:34.196 00:24:34.196 real 0m17.311s 00:24:34.196 user 0m34.060s 00:24:34.196 sys 0m7.126s 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:34.196 ************************************ 00:24:34.196 END TEST nvmf_shutdown_tc1 00:24:34.196 ************************************ 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:34.196 ************************************ 00:24:34.196 START TEST nvmf_shutdown_tc2 00:24:34.196 ************************************ 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:34.196 
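The tc1 teardown traced above reduces to a handful of host-side steps; the remove_spdk_ns body is hidden behind xtrace_disable here, so the namespace deletion shown below is an assumption about what that helper does rather than literal trace output:

modprobe -v -r nvme-tcp          # verbose removal; the rmmod lines above show nvme_tcp/nvme_fabrics/nvme_keyring going away
modprobe -v -r nvme-fabrics
kill 777521 && wait 777521       # killprocess: stop the nvmf_tgt reactor and reap it
ip netns delete cvl_0_0_ns_spdk  # assumed effect of remove_spdk_ns
ip -4 addr flush cvl_0_1         # clear the initiator-side address

run_test then reports the tc1 timing (0m17.311s real, 0m34.060s user, 0m7.126s sys) and moves straight on to nvmf_shutdown_tc2, whose nvmftestinit re-discovers the NICs and rebuilds the same namespace below.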
09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.196 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:34.197 09:33:20 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:34.197 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:34.197 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:24:34.197 Found net devices under 0000:31:00.0: cvl_0_0 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:34.197 Found net devices under 0000:31:00.1: cvl_0_1 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:34.197 09:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:34.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:34.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:24:34.197 00:24:34.197 --- 10.0.0.2 ping statistics --- 00:24:34.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.197 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:34.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:34.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:24:34.197 00:24:34.197 --- 10.0.0.1 ping statistics --- 00:24:34.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.197 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=779413 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 779413 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 
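The tc2 setup above finds the two e810 ports at 0000:31:00.0/.1 (exposed as cvl_0_0 and cvl_0_1), places one of them in a network namespace to play the target role, and verifies connectivity before launching nvmf_tgt inside that namespace on cores 1-4 (-m 0x1E). Collected from the xtrace, the namespace and addressing sequence is essentially:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target, 0.641 ms above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator, 0.336 ms above

The sub-millisecond round trips confirm the 10.0.0.1/10.0.0.2 pair is reachable before the target binds TCP port 4420.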
00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 779413 ']' 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:34.197 09:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:34.197 [2024-07-15 09:33:21.324721] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:24:34.197 [2024-07-15 09:33:21.324802] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.197 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.456 [2024-07-15 09:33:21.417062] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:34.456 [2024-07-15 09:33:21.479219] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.456 [2024-07-15 09:33:21.479256] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.456 [2024-07-15 09:33:21.479261] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.456 [2024-07-15 09:33:21.479265] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.456 [2024-07-15 09:33:21.479270] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
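The Tracepoint Group Mask notices above come from the -e 0xFFFF that nvmfappstart passed to nvmf_tgt, which enables every tracepoint group; the trace buffer is kept in shared memory under the -i 0 instance id. As the notices suggest, the events can either be sampled live or the raw buffer kept for later inspection:

spdk_trace -s nvmf -i 0          # snapshot of nvmf target events while it is running (command from the notice)
cp /dev/shm/nvmf_trace.0 /tmp/   # keep the buffer for offline analysis/debug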
00:24:34.456 [2024-07-15 09:33:21.479380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.456 [2024-07-15 09:33:21.479541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:34.456 [2024-07-15 09:33:21.479700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.456 [2024-07-15 09:33:21.479702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:35.025 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:35.026 [2024-07-15 09:33:22.149125] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:35.026 09:33:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.026 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:35.285 Malloc1 00:24:35.285 [2024-07-15 09:33:22.247656] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.285 Malloc2 00:24:35.285 Malloc3 00:24:35.285 Malloc4 00:24:35.285 Malloc5 00:24:35.285 Malloc6 00:24:35.285 Malloc7 00:24:35.545 Malloc8 00:24:35.545 Malloc9 00:24:35.545 Malloc10 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=779783 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 779783 /var/tmp/bdevperf.sock 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 779783 ']' 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:35.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
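The repeated '# cat' lines above are shutdown.sh's create_subsystems loop appending one block of RPCs per subsystem to rpcs.txt, which the single rpc_cmd call then replays against the target; the Malloc1..Malloc10 bdevs and the listener on 10.0.0.2 port 4420 reported above are the result. A plausible shape of what each iteration writes (argument values such as the malloc size/block size and the serial numbers are not visible in this trace and are illustrative):

for i in {1..10}; do
    cat <<EOF >> rpcs.txt
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_cmd < rpcs.txt    # replay the whole batch over the target's RPC socket

bdevperf (started next against /var/tmp/bdevperf.sock) then attaches to those ten subsystems using the same gen_nvmf_target_json config pattern as in tc1.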
00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:35.545 { 00:24:35.545 "params": { 00:24:35.545 "name": "Nvme$subsystem", 00:24:35.545 "trtype": "$TEST_TRANSPORT", 00:24:35.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:35.545 "adrfam": "ipv4", 00:24:35.545 "trsvcid": "$NVMF_PORT", 00:24:35.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:35.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:35.545 "hdgst": ${hdgst:-false}, 00:24:35.545 "ddgst": ${ddgst:-false} 00:24:35.545 }, 00:24:35.545 "method": "bdev_nvme_attach_controller" 00:24:35.545 } 00:24:35.545 EOF 00:24:35.545 )") 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:35.545 { 00:24:35.545 "params": { 00:24:35.545 "name": "Nvme$subsystem", 00:24:35.545 "trtype": "$TEST_TRANSPORT", 00:24:35.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:35.545 "adrfam": "ipv4", 00:24:35.545 "trsvcid": "$NVMF_PORT", 00:24:35.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:35.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:35.545 "hdgst": ${hdgst:-false}, 00:24:35.545 "ddgst": ${ddgst:-false} 00:24:35.545 }, 00:24:35.545 "method": "bdev_nvme_attach_controller" 00:24:35.545 } 00:24:35.545 EOF 00:24:35.545 )") 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:35.545 { 00:24:35.545 "params": { 00:24:35.545 "name": "Nvme$subsystem", 00:24:35.545 "trtype": "$TEST_TRANSPORT", 00:24:35.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:35.545 "adrfam": "ipv4", 00:24:35.545 "trsvcid": "$NVMF_PORT", 00:24:35.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:35.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:35.545 "hdgst": ${hdgst:-false}, 00:24:35.545 "ddgst": ${ddgst:-false} 00:24:35.545 }, 00:24:35.545 "method": "bdev_nvme_attach_controller" 00:24:35.545 } 00:24:35.545 EOF 00:24:35.545 )") 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:35.545 { 00:24:35.545 "params": { 00:24:35.545 "name": "Nvme$subsystem", 00:24:35.545 "trtype": "$TEST_TRANSPORT", 00:24:35.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:35.545 "adrfam": "ipv4", 00:24:35.545 "trsvcid": "$NVMF_PORT", 00:24:35.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:35.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:35.545 "hdgst": ${hdgst:-false}, 00:24:35.545 "ddgst": ${ddgst:-false} 00:24:35.545 }, 00:24:35.545 "method": "bdev_nvme_attach_controller" 00:24:35.545 } 00:24:35.545 EOF 00:24:35.545 )") 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:35.545 { 00:24:35.545 "params": { 00:24:35.545 "name": "Nvme$subsystem", 00:24:35.545 "trtype": "$TEST_TRANSPORT", 00:24:35.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:35.545 "adrfam": "ipv4", 00:24:35.545 "trsvcid": "$NVMF_PORT", 00:24:35.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:35.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:35.545 "hdgst": ${hdgst:-false}, 00:24:35.545 "ddgst": ${ddgst:-false} 00:24:35.545 }, 00:24:35.545 "method": "bdev_nvme_attach_controller" 00:24:35.545 } 00:24:35.545 EOF 00:24:35.545 )") 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:35.545 { 00:24:35.545 "params": { 00:24:35.545 "name": "Nvme$subsystem", 00:24:35.545 "trtype": "$TEST_TRANSPORT", 00:24:35.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:35.545 "adrfam": "ipv4", 00:24:35.545 "trsvcid": "$NVMF_PORT", 00:24:35.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:35.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:35.545 "hdgst": ${hdgst:-false}, 00:24:35.545 "ddgst": ${ddgst:-false} 00:24:35.545 }, 00:24:35.545 "method": "bdev_nvme_attach_controller" 00:24:35.545 } 00:24:35.545 EOF 00:24:35.545 )") 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:35.545 { 00:24:35.545 "params": { 00:24:35.545 "name": "Nvme$subsystem", 00:24:35.545 "trtype": "$TEST_TRANSPORT", 00:24:35.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:35.545 "adrfam": "ipv4", 00:24:35.545 "trsvcid": "$NVMF_PORT", 00:24:35.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:35.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:35.545 "hdgst": ${hdgst:-false}, 00:24:35.545 "ddgst": ${ddgst:-false} 00:24:35.545 }, 00:24:35.545 "method": "bdev_nvme_attach_controller" 00:24:35.545 } 00:24:35.545 EOF 00:24:35.545 )") 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:24:35.545 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:35.545 { 00:24:35.546 "params": { 00:24:35.546 "name": "Nvme$subsystem", 00:24:35.546 "trtype": "$TEST_TRANSPORT", 00:24:35.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:35.546 "adrfam": "ipv4", 00:24:35.546 "trsvcid": "$NVMF_PORT", 00:24:35.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:35.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:35.546 "hdgst": ${hdgst:-false}, 00:24:35.546 "ddgst": ${ddgst:-false} 00:24:35.546 }, 00:24:35.546 "method": "bdev_nvme_attach_controller" 00:24:35.546 } 00:24:35.546 EOF 00:24:35.546 )") 00:24:35.546 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:35.546 [2024-07-15 09:33:22.703435] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:24:35.546 [2024-07-15 09:33:22.703484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779783 ] 00:24:35.546 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:35.546 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:35.546 { 00:24:35.546 "params": { 00:24:35.546 "name": "Nvme$subsystem", 00:24:35.546 "trtype": "$TEST_TRANSPORT", 00:24:35.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:35.546 "adrfam": "ipv4", 00:24:35.546 "trsvcid": "$NVMF_PORT", 00:24:35.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:35.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:35.546 "hdgst": ${hdgst:-false}, 00:24:35.546 "ddgst": ${ddgst:-false} 00:24:35.546 }, 00:24:35.546 "method": "bdev_nvme_attach_controller" 00:24:35.546 } 00:24:35.546 EOF 00:24:35.546 )") 00:24:35.546 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:35.546 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:35.546 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:35.546 { 00:24:35.546 "params": { 00:24:35.546 "name": "Nvme$subsystem", 00:24:35.546 "trtype": "$TEST_TRANSPORT", 00:24:35.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:35.546 "adrfam": "ipv4", 00:24:35.546 "trsvcid": "$NVMF_PORT", 00:24:35.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:35.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:35.546 "hdgst": ${hdgst:-false}, 00:24:35.546 "ddgst": ${ddgst:-false} 00:24:35.546 }, 00:24:35.546 "method": "bdev_nvme_attach_controller" 00:24:35.546 } 00:24:35.546 EOF 00:24:35.546 )") 00:24:35.546 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:35.546 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
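The trace above shows gen_nvmf_target_json (invoked at target/shutdown.sh@102, implemented around nvmf/common.sh@532-558) building one bdev_nvme_attach_controller fragment per subsystem and then joining them for bdevperf's --json input. The following is a simplified stand-alone sketch of that pattern, not the exact source: the per-subsystem fragment, the variable names, and the jq/IFS/printf join steps are taken from the trace; the outer "subsystems"/"bdev" wrapper and the default values are assumptions, since heredoc contents are not echoed by xtrace.

#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern seen in the trace above.
# Defaults below are illustrative; the real values come from the test
# environment (TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, NVMF_PORT).
TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
NVMF_PORT=${NVMF_PORT:-4420}

gen_nvmf_target_json() {
    local subsystem config=()

    # One bdev_nvme_attach_controller entry per requested subsystem ID
    # (nvmf/common.sh@534/@554 in the trace).
    for subsystem in "${@:-1}"; do
        config+=("$(cat << EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done

    # Join the fragments with commas and validate/pretty-print with jq,
    # matching the jq/IFS/printf steps at nvmf/common.sh@556-558. The
    # surrounding "subsystems"/"bdev" wrapper is an assumption about the
    # config document bdevperf consumes via --json.
    jq . << JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        $(IFS=,; printf '%s\n' "${config[*]}")
      ]
    }
  ]
}
JSON
}

# As invoked in the trace: ten controllers, cnode1 through cnode10.
gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10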
00:24:35.546 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:24:35.546 09:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:35.546 "params": { 00:24:35.546 "name": "Nvme1", 00:24:35.546 "trtype": "tcp", 00:24:35.546 "traddr": "10.0.0.2", 00:24:35.546 "adrfam": "ipv4", 00:24:35.546 "trsvcid": "4420", 00:24:35.546 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.546 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:35.546 "hdgst": false, 00:24:35.546 "ddgst": false 00:24:35.546 }, 00:24:35.546 "method": "bdev_nvme_attach_controller" 00:24:35.546 },{ 00:24:35.546 "params": { 00:24:35.546 "name": "Nvme2", 00:24:35.546 "trtype": "tcp", 00:24:35.546 "traddr": "10.0.0.2", 00:24:35.546 "adrfam": "ipv4", 00:24:35.546 "trsvcid": "4420", 00:24:35.546 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:35.546 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:35.546 "hdgst": false, 00:24:35.546 "ddgst": false 00:24:35.546 }, 00:24:35.546 "method": "bdev_nvme_attach_controller" 00:24:35.546 },{ 00:24:35.546 "params": { 00:24:35.546 "name": "Nvme3", 00:24:35.546 "trtype": "tcp", 00:24:35.546 "traddr": "10.0.0.2", 00:24:35.546 "adrfam": "ipv4", 00:24:35.546 "trsvcid": "4420", 00:24:35.546 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:35.546 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:35.546 "hdgst": false, 00:24:35.546 "ddgst": false 00:24:35.546 }, 00:24:35.546 "method": "bdev_nvme_attach_controller" 00:24:35.546 },{ 00:24:35.546 "params": { 00:24:35.546 "name": "Nvme4", 00:24:35.546 "trtype": "tcp", 00:24:35.546 "traddr": "10.0.0.2", 00:24:35.546 "adrfam": "ipv4", 00:24:35.546 "trsvcid": "4420", 00:24:35.546 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:35.546 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:35.546 "hdgst": false, 00:24:35.546 "ddgst": false 00:24:35.546 }, 00:24:35.546 "method": "bdev_nvme_attach_controller" 00:24:35.546 },{ 00:24:35.546 "params": { 00:24:35.546 "name": "Nvme5", 00:24:35.546 "trtype": "tcp", 00:24:35.546 "traddr": "10.0.0.2", 00:24:35.546 "adrfam": "ipv4", 00:24:35.546 "trsvcid": "4420", 00:24:35.546 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:35.546 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:35.546 "hdgst": false, 00:24:35.546 "ddgst": false 00:24:35.546 }, 00:24:35.546 "method": "bdev_nvme_attach_controller" 00:24:35.546 },{ 00:24:35.546 "params": { 00:24:35.546 "name": "Nvme6", 00:24:35.546 "trtype": "tcp", 00:24:35.546 "traddr": "10.0.0.2", 00:24:35.546 "adrfam": "ipv4", 00:24:35.546 "trsvcid": "4420", 00:24:35.546 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:35.546 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:35.546 "hdgst": false, 00:24:35.546 "ddgst": false 00:24:35.546 }, 00:24:35.546 "method": "bdev_nvme_attach_controller" 00:24:35.546 },{ 00:24:35.546 "params": { 00:24:35.546 "name": "Nvme7", 00:24:35.546 "trtype": "tcp", 00:24:35.546 "traddr": "10.0.0.2", 00:24:35.546 "adrfam": "ipv4", 00:24:35.546 "trsvcid": "4420", 00:24:35.546 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:35.546 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:35.546 "hdgst": false, 00:24:35.546 "ddgst": false 00:24:35.546 }, 00:24:35.546 "method": "bdev_nvme_attach_controller" 00:24:35.546 },{ 00:24:35.546 "params": { 00:24:35.546 "name": "Nvme8", 00:24:35.546 "trtype": "tcp", 00:24:35.546 "traddr": "10.0.0.2", 00:24:35.546 "adrfam": "ipv4", 00:24:35.546 "trsvcid": "4420", 00:24:35.546 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:35.546 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:35.546 "hdgst": false, 
00:24:35.546 "ddgst": false 00:24:35.546 }, 00:24:35.546 "method": "bdev_nvme_attach_controller" 00:24:35.546 },{ 00:24:35.546 "params": { 00:24:35.546 "name": "Nvme9", 00:24:35.546 "trtype": "tcp", 00:24:35.546 "traddr": "10.0.0.2", 00:24:35.546 "adrfam": "ipv4", 00:24:35.546 "trsvcid": "4420", 00:24:35.546 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:35.546 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:35.546 "hdgst": false, 00:24:35.546 "ddgst": false 00:24:35.546 }, 00:24:35.546 "method": "bdev_nvme_attach_controller" 00:24:35.546 },{ 00:24:35.546 "params": { 00:24:35.546 "name": "Nvme10", 00:24:35.546 "trtype": "tcp", 00:24:35.546 "traddr": "10.0.0.2", 00:24:35.546 "adrfam": "ipv4", 00:24:35.546 "trsvcid": "4420", 00:24:35.546 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:35.546 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:35.546 "hdgst": false, 00:24:35.546 "ddgst": false 00:24:35.546 }, 00:24:35.546 "method": "bdev_nvme_attach_controller" 00:24:35.546 }' 00:24:35.546 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.805 [2024-07-15 09:33:22.769951] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.805 [2024-07-15 09:33:22.834325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.207 Running I/O for 10 seconds... 00:24:37.207 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:37.207 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:24:37.207 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:37.207 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.207 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:37.207 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.207 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:37.207 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:37.207 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:37.207 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:24:37.207 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:24:37.207 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:37.207 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:37.207 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:37.207 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:37.207 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.207 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:37.207 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.207 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:37.207 09:33:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:37.207 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:37.466 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:37.466 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:37.466 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:37.466 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:37.466 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.466 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:37.466 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.725 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:24:37.725 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:24:37.725 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:37.986 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:37.986 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:37.986 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:37.986 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:37.986 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.986 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:37.986 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.986 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:24:37.986 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:24:37.986 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:24:37.986 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:24:37.986 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:24:37.986 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 779783 00:24:37.986 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 779783 ']' 00:24:37.986 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 779783 00:24:37.986 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:24:37.986 09:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:37.986 09:33:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 779783 00:24:37.986 09:33:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:37.986 09:33:25 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:37.986 09:33:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 779783' 00:24:37.986 killing process with pid 779783 00:24:37.986 09:33:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 779783 00:24:37.986 09:33:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 779783 00:24:37.986 Received shutdown signal, test time was about 0.945753 seconds 00:24:37.986 00:24:37.986 Latency(us) 00:24:37.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.986 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:37.986 Verification LBA range: start 0x0 length 0x400 00:24:37.986 Nvme1n1 : 0.94 270.94 16.93 0.00 0.00 233348.91 19988.48 241172.48 00:24:37.986 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:37.986 Verification LBA range: start 0x0 length 0x400 00:24:37.986 Nvme2n1 : 0.91 210.73 13.17 0.00 0.00 293187.70 17803.95 246415.36 00:24:37.986 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:37.986 Verification LBA range: start 0x0 length 0x400 00:24:37.986 Nvme3n1 : 0.94 279.08 17.44 0.00 0.00 216579.48 5379.41 258648.75 00:24:37.986 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:37.986 Verification LBA range: start 0x0 length 0x400 00:24:37.986 Nvme4n1 : 0.92 225.48 14.09 0.00 0.00 258279.25 12451.84 253405.87 00:24:37.986 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:37.986 Verification LBA range: start 0x0 length 0x400 00:24:37.986 Nvme5n1 : 0.94 277.42 17.34 0.00 0.00 207985.73 5761.71 210589.01 00:24:37.986 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:37.986 Verification LBA range: start 0x0 length 0x400 00:24:37.986 Nvme6n1 : 0.93 206.99 12.94 0.00 0.00 273149.16 18131.63 262144.00 00:24:37.986 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:37.986 Verification LBA range: start 0x0 length 0x400 00:24:37.986 Nvme7n1 : 0.93 274.80 17.17 0.00 0.00 201036.59 12342.61 253405.87 00:24:37.986 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:37.986 Verification LBA range: start 0x0 length 0x400 00:24:37.986 Nvme8n1 : 0.92 208.27 13.02 0.00 0.00 258331.31 15182.51 237677.23 00:24:37.986 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:37.986 Verification LBA range: start 0x0 length 0x400 00:24:37.986 Nvme9n1 : 0.91 211.50 13.22 0.00 0.00 247083.24 28180.48 242920.11 00:24:37.986 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:37.986 Verification LBA range: start 0x0 length 0x400 00:24:37.986 Nvme10n1 : 0.94 204.61 12.79 0.00 0.00 251205.69 23374.51 281367.89 00:24:37.986 =================================================================================================================== 00:24:37.986 Total : 2369.82 148.11 0.00 0.00 240566.00 5379.41 281367.89 00:24:38.246 09:33:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:24:39.185 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 779413 00:24:39.185 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:24:39.185 09:33:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:39.185 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:39.185 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:39.185 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:39.185 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:39.185 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:24:39.185 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:39.185 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:24:39.185 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:39.185 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:39.185 rmmod nvme_tcp 00:24:39.185 rmmod nvme_fabrics 00:24:39.185 rmmod nvme_keyring 00:24:39.185 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:39.185 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:24:39.185 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:24:39.185 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 779413 ']' 00:24:39.185 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 779413 00:24:39.185 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 779413 ']' 00:24:39.185 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 779413 00:24:39.185 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:24:39.185 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:39.185 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 779413 00:24:39.445 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:39.445 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:39.445 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 779413' 00:24:39.445 killing process with pid 779413 00:24:39.445 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 779413 00:24:39.445 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 779413 00:24:39.445 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:39.445 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:39.445 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:39.445 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:39.445 
09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:39.445 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.446 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:39.446 09:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:41.994 00:24:41.994 real 0m7.799s 00:24:41.994 user 0m23.249s 00:24:41.994 sys 0m1.207s 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:41.994 ************************************ 00:24:41.994 END TEST nvmf_shutdown_tc2 00:24:41.994 ************************************ 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:41.994 ************************************ 00:24:41.994 START TEST nvmf_shutdown_tc3 00:24:41.994 ************************************ 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:41.994 
09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:41.994 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:41.994 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:41.995 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:41.995 Found net devices under 0000:31:00.0: cvl_0_0 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.995 09:33:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:41.995 Found net devices under 0000:31:00.1: cvl_0_1 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:41.995 09:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:41.995 09:33:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:41.995 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:41.995 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:41.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:41.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:24:41.995 00:24:41.995 --- 10.0.0.2 ping statistics --- 00:24:41.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.995 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:24:41.995 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:41.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:41.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:24:41.995 00:24:41.995 --- 10.0.0.1 ping statistics --- 00:24:41.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.995 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:24:41.995 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:41.995 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:24:41.995 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:41.995 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:41.995 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:41.995 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:41.995 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:41.995 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:41.995 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:41.995 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:41.995 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:41.995 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:41.995 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:41.995 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=781241 00:24:41.995 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 781241 00:24:41.995 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:41.995 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 781241 ']' 00:24:41.995 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.995 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:41.995 09:33:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:41.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:41.995 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:41.995 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:42.257 [2024-07-15 09:33:29.208290] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:24:42.257 [2024-07-15 09:33:29.208339] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.257 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.257 [2024-07-15 09:33:29.295176] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:42.257 [2024-07-15 09:33:29.350342] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.257 [2024-07-15 09:33:29.350376] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.257 [2024-07-15 09:33:29.350382] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.257 [2024-07-15 09:33:29.350386] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.257 [2024-07-15 09:33:29.350390] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:42.257 [2024-07-15 09:33:29.350500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:42.257 [2024-07-15 09:33:29.350665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:42.257 [2024-07-15 09:33:29.350800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:42.257 [2024-07-15 09:33:29.351003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.830 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:42.830 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:24:42.830 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:42.830 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:42.830 09:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:42.830 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.830 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:42.830 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.830 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:42.830 [2024-07-15 09:33:30.010110] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.830 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.830 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:42.830 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:42.830 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:42.830 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:42.830 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:42.830 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:42.830 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:43.091 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:43.091 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:43.091 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:43.091 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:43.091 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:43.091 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:43.091 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:43.091 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:43.091 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:43.091 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:43.091 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:43.091 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:43.091 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:43.091 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:43.091 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:43.091 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:43.091 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:43.091 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:43.091 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:43.091 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.091 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:43.091 Malloc1 00:24:43.091 [2024-07-15 09:33:30.109740] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.091 Malloc2 00:24:43.091 Malloc3 00:24:43.091 Malloc4 00:24:43.091 Malloc5 00:24:43.091 Malloc6 00:24:43.352 Malloc7 00:24:43.352 Malloc8 00:24:43.352 Malloc9 00:24:43.352 Malloc10 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=781548 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 781548 /var/tmp/bdevperf.sock 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 781548 ']' 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:43.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:43.353 { 00:24:43.353 "params": { 00:24:43.353 "name": "Nvme$subsystem", 00:24:43.353 "trtype": "$TEST_TRANSPORT", 00:24:43.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.353 "adrfam": "ipv4", 00:24:43.353 "trsvcid": "$NVMF_PORT", 00:24:43.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.353 "hdgst": ${hdgst:-false}, 00:24:43.353 "ddgst": ${ddgst:-false} 00:24:43.353 }, 00:24:43.353 "method": "bdev_nvme_attach_controller" 00:24:43.353 } 00:24:43.353 EOF 00:24:43.353 )") 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:43.353 { 00:24:43.353 "params": { 00:24:43.353 "name": "Nvme$subsystem", 00:24:43.353 "trtype": "$TEST_TRANSPORT", 00:24:43.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.353 "adrfam": "ipv4", 00:24:43.353 "trsvcid": "$NVMF_PORT", 00:24:43.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:24:43.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.353 "hdgst": ${hdgst:-false}, 00:24:43.353 "ddgst": ${ddgst:-false} 00:24:43.353 }, 00:24:43.353 "method": "bdev_nvme_attach_controller" 00:24:43.353 } 00:24:43.353 EOF 00:24:43.353 )") 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:43.353 { 00:24:43.353 "params": { 00:24:43.353 "name": "Nvme$subsystem", 00:24:43.353 "trtype": "$TEST_TRANSPORT", 00:24:43.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.353 "adrfam": "ipv4", 00:24:43.353 "trsvcid": "$NVMF_PORT", 00:24:43.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.353 "hdgst": ${hdgst:-false}, 00:24:43.353 "ddgst": ${ddgst:-false} 00:24:43.353 }, 00:24:43.353 "method": "bdev_nvme_attach_controller" 00:24:43.353 } 00:24:43.353 EOF 00:24:43.353 )") 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:43.353 { 00:24:43.353 "params": { 00:24:43.353 "name": "Nvme$subsystem", 00:24:43.353 "trtype": "$TEST_TRANSPORT", 00:24:43.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.353 "adrfam": "ipv4", 00:24:43.353 "trsvcid": "$NVMF_PORT", 00:24:43.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.353 "hdgst": ${hdgst:-false}, 00:24:43.353 "ddgst": ${ddgst:-false} 00:24:43.353 }, 00:24:43.353 "method": "bdev_nvme_attach_controller" 00:24:43.353 } 00:24:43.353 EOF 00:24:43.353 )") 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:43.353 { 00:24:43.353 "params": { 00:24:43.353 "name": "Nvme$subsystem", 00:24:43.353 "trtype": "$TEST_TRANSPORT", 00:24:43.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.353 "adrfam": "ipv4", 00:24:43.353 "trsvcid": "$NVMF_PORT", 00:24:43.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.353 "hdgst": ${hdgst:-false}, 00:24:43.353 "ddgst": ${ddgst:-false} 00:24:43.353 }, 00:24:43.353 "method": "bdev_nvme_attach_controller" 00:24:43.353 } 00:24:43.353 EOF 00:24:43.353 )") 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:43.353 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:43.615 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:43.615 { 00:24:43.615 "params": { 00:24:43.615 "name": "Nvme$subsystem", 00:24:43.615 "trtype": "$TEST_TRANSPORT", 00:24:43.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.615 "adrfam": "ipv4", 00:24:43.615 "trsvcid": "$NVMF_PORT", 00:24:43.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.615 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:24:43.615 "hdgst": ${hdgst:-false}, 00:24:43.615 "ddgst": ${ddgst:-false} 00:24:43.615 }, 00:24:43.615 "method": "bdev_nvme_attach_controller" 00:24:43.615 } 00:24:43.615 EOF 00:24:43.615 )") 00:24:43.615 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:43.615 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:43.615 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:43.615 { 00:24:43.615 "params": { 00:24:43.615 "name": "Nvme$subsystem", 00:24:43.615 "trtype": "$TEST_TRANSPORT", 00:24:43.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.615 "adrfam": "ipv4", 00:24:43.615 "trsvcid": "$NVMF_PORT", 00:24:43.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.615 "hdgst": ${hdgst:-false}, 00:24:43.615 "ddgst": ${ddgst:-false} 00:24:43.615 }, 00:24:43.615 "method": "bdev_nvme_attach_controller" 00:24:43.615 } 00:24:43.615 EOF 00:24:43.615 )") 00:24:43.615 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:43.615 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:43.615 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:43.615 { 00:24:43.615 "params": { 00:24:43.615 "name": "Nvme$subsystem", 00:24:43.615 "trtype": "$TEST_TRANSPORT", 00:24:43.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.615 "adrfam": "ipv4", 00:24:43.615 "trsvcid": "$NVMF_PORT", 00:24:43.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.615 "hdgst": ${hdgst:-false}, 00:24:43.615 "ddgst": ${ddgst:-false} 00:24:43.615 }, 00:24:43.615 "method": "bdev_nvme_attach_controller" 00:24:43.615 } 00:24:43.615 EOF 00:24:43.615 )") 00:24:43.615 [2024-07-15 09:33:30.568238] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:24:43.615 [2024-07-15 09:33:30.568305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781548 ] 00:24:43.615 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:43.615 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:43.615 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:43.615 { 00:24:43.615 "params": { 00:24:43.615 "name": "Nvme$subsystem", 00:24:43.615 "trtype": "$TEST_TRANSPORT", 00:24:43.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.615 "adrfam": "ipv4", 00:24:43.615 "trsvcid": "$NVMF_PORT", 00:24:43.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.616 "hdgst": ${hdgst:-false}, 00:24:43.616 "ddgst": ${ddgst:-false} 00:24:43.616 }, 00:24:43.616 "method": "bdev_nvme_attach_controller" 00:24:43.616 } 00:24:43.616 EOF 00:24:43.616 )") 00:24:43.616 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:43.616 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:43.616 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:43.616 { 00:24:43.616 "params": { 00:24:43.616 "name": "Nvme$subsystem", 00:24:43.616 "trtype": "$TEST_TRANSPORT", 00:24:43.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.616 "adrfam": "ipv4", 00:24:43.616 "trsvcid": "$NVMF_PORT", 00:24:43.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.616 "hdgst": ${hdgst:-false}, 00:24:43.616 "ddgst": ${ddgst:-false} 00:24:43.616 }, 00:24:43.616 "method": "bdev_nvme_attach_controller" 00:24:43.616 } 00:24:43.616 EOF 00:24:43.616 )") 00:24:43.616 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:43.616 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
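The resolved configuration printed next is handed to bdevperf over /dev/fd/63, after which the test polls for read completions with the waitforio helper, the same loop already traced above for tc2 (target/shutdown.sh@50-@69: start at i=10, query bdev_get_iostat, break once num_read_ops reaches 100, otherwise sleep 0.25s). Below is a minimal reconstruction of that helper from the trace, for reference only; rpc_cmd's definition is not shown in this log and is assumed to wrap SPDK's scripts/rpc.py client.

# Reconstructed waitforio sketch, based on the shutdown.sh trace above.
waitforio() {
    local rpc_sock=$1 bdev=$2

    [ -z "$rpc_sock" ] && return 1            # shutdown.sh@50
    [ -z "$bdev" ] && return 1                # shutdown.sh@54

    local ret=1                               # shutdown.sh@57
    local i read_io_count                     # shutdown.sh@58
    for ((i = 10; i != 0; i--)); do           # shutdown.sh@59
        # shutdown.sh@60: read the bdev's completed read count over RPC.
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then # shutdown.sh@63
            ret=0                             # shutdown.sh@64
            break                             # shutdown.sh@65
        fi
        sleep 0.25                            # shutdown.sh@67
    done
    return $ret                               # shutdown.sh@69
}

# As called at shutdown.sh@107 (tc2) and @132 (tc3) in this log:
# waitforio /var/tmp/bdevperf.sock Nvme1n1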
00:24:43.616 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:24:43.616 09:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:43.616 "params": { 00:24:43.616 "name": "Nvme1", 00:24:43.616 "trtype": "tcp", 00:24:43.616 "traddr": "10.0.0.2", 00:24:43.616 "adrfam": "ipv4", 00:24:43.616 "trsvcid": "4420", 00:24:43.616 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.616 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:43.616 "hdgst": false, 00:24:43.616 "ddgst": false 00:24:43.616 }, 00:24:43.616 "method": "bdev_nvme_attach_controller" 00:24:43.616 },{ 00:24:43.616 "params": { 00:24:43.616 "name": "Nvme2", 00:24:43.616 "trtype": "tcp", 00:24:43.616 "traddr": "10.0.0.2", 00:24:43.616 "adrfam": "ipv4", 00:24:43.616 "trsvcid": "4420", 00:24:43.616 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:43.616 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:43.616 "hdgst": false, 00:24:43.616 "ddgst": false 00:24:43.616 }, 00:24:43.616 "method": "bdev_nvme_attach_controller" 00:24:43.616 },{ 00:24:43.616 "params": { 00:24:43.616 "name": "Nvme3", 00:24:43.616 "trtype": "tcp", 00:24:43.616 "traddr": "10.0.0.2", 00:24:43.616 "adrfam": "ipv4", 00:24:43.616 "trsvcid": "4420", 00:24:43.616 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:43.616 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:43.616 "hdgst": false, 00:24:43.616 "ddgst": false 00:24:43.616 }, 00:24:43.616 "method": "bdev_nvme_attach_controller" 00:24:43.616 },{ 00:24:43.616 "params": { 00:24:43.616 "name": "Nvme4", 00:24:43.616 "trtype": "tcp", 00:24:43.616 "traddr": "10.0.0.2", 00:24:43.616 "adrfam": "ipv4", 00:24:43.616 "trsvcid": "4420", 00:24:43.616 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:43.616 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:43.616 "hdgst": false, 00:24:43.616 "ddgst": false 00:24:43.616 }, 00:24:43.616 "method": "bdev_nvme_attach_controller" 00:24:43.616 },{ 00:24:43.616 "params": { 00:24:43.616 "name": "Nvme5", 00:24:43.616 "trtype": "tcp", 00:24:43.616 "traddr": "10.0.0.2", 00:24:43.616 "adrfam": "ipv4", 00:24:43.616 "trsvcid": "4420", 00:24:43.616 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:43.616 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:43.616 "hdgst": false, 00:24:43.616 "ddgst": false 00:24:43.616 }, 00:24:43.616 "method": "bdev_nvme_attach_controller" 00:24:43.616 },{ 00:24:43.616 "params": { 00:24:43.616 "name": "Nvme6", 00:24:43.616 "trtype": "tcp", 00:24:43.616 "traddr": "10.0.0.2", 00:24:43.616 "adrfam": "ipv4", 00:24:43.616 "trsvcid": "4420", 00:24:43.616 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:43.616 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:43.616 "hdgst": false, 00:24:43.616 "ddgst": false 00:24:43.616 }, 00:24:43.616 "method": "bdev_nvme_attach_controller" 00:24:43.616 },{ 00:24:43.616 "params": { 00:24:43.616 "name": "Nvme7", 00:24:43.616 "trtype": "tcp", 00:24:43.616 "traddr": "10.0.0.2", 00:24:43.616 "adrfam": "ipv4", 00:24:43.616 "trsvcid": "4420", 00:24:43.616 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:43.616 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:43.616 "hdgst": false, 00:24:43.616 "ddgst": false 00:24:43.616 }, 00:24:43.616 "method": "bdev_nvme_attach_controller" 00:24:43.616 },{ 00:24:43.616 "params": { 00:24:43.616 "name": "Nvme8", 00:24:43.616 "trtype": "tcp", 00:24:43.616 "traddr": "10.0.0.2", 00:24:43.616 "adrfam": "ipv4", 00:24:43.616 "trsvcid": "4420", 00:24:43.616 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:43.616 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:43.616 "hdgst": false, 
00:24:43.616 "ddgst": false 00:24:43.616 }, 00:24:43.616 "method": "bdev_nvme_attach_controller" 00:24:43.616 },{ 00:24:43.616 "params": { 00:24:43.616 "name": "Nvme9", 00:24:43.616 "trtype": "tcp", 00:24:43.616 "traddr": "10.0.0.2", 00:24:43.616 "adrfam": "ipv4", 00:24:43.616 "trsvcid": "4420", 00:24:43.616 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:43.616 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:43.616 "hdgst": false, 00:24:43.616 "ddgst": false 00:24:43.616 }, 00:24:43.616 "method": "bdev_nvme_attach_controller" 00:24:43.616 },{ 00:24:43.616 "params": { 00:24:43.616 "name": "Nvme10", 00:24:43.616 "trtype": "tcp", 00:24:43.616 "traddr": "10.0.0.2", 00:24:43.616 "adrfam": "ipv4", 00:24:43.616 "trsvcid": "4420", 00:24:43.616 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:43.616 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:43.616 "hdgst": false, 00:24:43.616 "ddgst": false 00:24:43.616 }, 00:24:43.616 "method": "bdev_nvme_attach_controller" 00:24:43.616 }' 00:24:43.616 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.616 [2024-07-15 09:33:30.635775] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.616 [2024-07-15 09:33:30.700530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.003 Running I/O for 10 seconds... 00:24:45.003 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:45.003 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:24:45.003 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:45.003 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.003 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:45.265 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.265 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:45.265 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:45.265 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:45.265 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:45.265 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:24:45.265 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:24:45.265 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:45.265 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:45.265 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:45.265 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:45.265 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.265 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:45.265 09:33:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.265 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:45.265 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:45.265 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:45.526 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:45.526 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:45.526 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:45.526 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:45.526 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.526 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:45.526 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.526 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:24:45.526 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:24:45.526 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:45.787 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:45.787 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:45.787 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:45.787 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:45.787 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.787 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:45.787 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.788 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=136 00:24:45.788 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 136 -ge 100 ']' 00:24:45.788 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:24:45.788 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:24:45.788 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:24:45.788 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 781241 00:24:45.788 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 781241 ']' 00:24:45.788 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 781241 00:24:45.788 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:24:45.788 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:45.788 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 
-- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 781241 00:24:45.788 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:45.788 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:45.788 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 781241' 00:24:45.788 killing process with pid 781241 00:24:45.788 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 781241 00:24:45.788 09:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 781241 00:24:46.113 [2024-07-15 09:33:32.989691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e16f0 is same with the state(5) to be set 00:24:46.113 [2024-07-15 09:33:32.989765] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e16f0 is same with the state(5) to be set 00:24:46.113 [2024-07-15 09:33:32.989772] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e16f0 is same with the state(5) to be set 00:24:46.113 [2024-07-15 09:33:32.989777] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e16f0 is same with the state(5) to be set 00:24:46.113 [2024-07-15 09:33:32.989781] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e16f0 is same with the state(5) to be set 00:24:46.113 [2024-07-15 09:33:32.989786] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e16f0 is same with the state(5) to be set 00:24:46.113 [2024-07-15 09:33:32.989790] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e16f0 is same with the state(5) to be set 00:24:46.113 [2024-07-15 09:33:32.989795] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e16f0 is same with the state(5) to be set 00:24:46.113 [2024-07-15 09:33:32.989800] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e16f0 is same with the state(5) to be set 00:24:46.113 [2024-07-15 09:33:32.989804] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e16f0 is same with the state(5) to be set 00:24:46.113 [2024-07-15 09:33:32.989809] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e16f0 is same with the state(5) to be set 00:24:46.113 [2024-07-15 09:33:32.989813] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e16f0 is same with the state(5) to be set 00:24:46.113 [2024-07-15 09:33:32.989818] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e16f0 is same with the state(5) to be set 00:24:46.113 [2024-07-15 09:33:32.989827] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e16f0 is same with the state(5) to be set 00:24:46.113 [2024-07-15 09:33:32.989832] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e16f0 is same with the state(5) to be set 00:24:46.113 [2024-07-15 09:33:32.989837] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e16f0 is same with the state(5) to be set 00:24:46.113 [2024-07-15 09:33:32.989841] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e16f0 is same with the state(5) to be set 00:24:46.113 [2024-07-15 
09:33:32.989846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e16f0 is same with the state(5) to be set
[... the same tcp.c:1607 message is repeated for tqpair=0x20e16f0 through 09:33:32.990052, with only the timestamp changing ...]
00:24:46.114 [2024-07-15 09:33:32.990855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:46.114 [2024-07-15 09:33:32.990892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:46.114 [2024-07-15 09:33:32.990902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:46.114 [2024-07-15 09:33:32.990910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:46.114 [2024-07-15 09:33:32.990922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:46.114 [2024-07-15 09:33:32.990929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:46.114 [2024-07-15 09:33:32.990937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:46.114 [2024-07-15 09:33:32.990944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:46.114 [2024-07-15 09:33:32.990952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e95d0 is same with the state(5) to be set
00:24:46.114 [2024-07-15 09:33:32.991404] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e40f0 is same with the state(5) to be set
[... the same message is repeated for tqpair=0x20e40f0 through 09:33:32.991717 ...]
00:24:46.115 [2024-07-15 09:33:32.994762] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e1b90 is same with the state(5) to be set
[... the same message is repeated for tqpair=0x20e1b90 through 09:33:32.995057 ...]
00:24:46.115 [2024-07-15 09:33:33.000444] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e24f0 is same with the state(5) to be set
[... the same message is repeated for tqpair=0x20e24f0 through 09:33:33.000747 ...]
00:24:46.116 [2024-07-15 09:33:33.001242] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2990 is same with the state(5) to be set
[... the same message is repeated for tqpair=0x20e2990 through 09:33:33.001533 ...]
00:24:46.116 [2024-07-15 09:33:33.002127] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set
[... the same message is repeated for tqpair=0x20e2e50 in the entries that follow ...]
*ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002215] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002219] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002223] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002233] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002237] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002241] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002246] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002254] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002259] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002267] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002272] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002281] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002285] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002290] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002294] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002299] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002303] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 
09:33:33.002307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002311] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002325] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.117 [2024-07-15 09:33:33.002330] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.118 [2024-07-15 09:33:33.002335] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.002339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.002344] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.002348] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.002352] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.002356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.002361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.002365] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.002370] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.002374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.002378] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.002383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.002388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.002393] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2e50 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003020] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003042] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same 
with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003048] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003053] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003057] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003067] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003071] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003076] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003080] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003085] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003099] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003104] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003108] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003113] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003118] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003122] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003127] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003145] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003150] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003155] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003160] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003169] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003174] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003178] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003183] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003188] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003201] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003219] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003226] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003230] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003234] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003244] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the 
state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003257] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003262] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003266] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003271] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003275] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003280] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003284] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003289] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003293] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003311] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.119 [2024-07-15 09:33:33.003315] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.003320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.003324] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.003328] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.003333] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e32f0 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004047] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004061] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004074] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004079] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004083] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004088] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004092] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004096] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004101] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004110] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004114] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004119] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004123] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004127] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004145] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004150] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004154] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004158] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004163] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 
09:33:33.004172] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004176] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004180] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004185] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004190] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004195] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004199] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004212] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004217] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004221] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004226] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004230] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004235] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004243] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004261] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same 
with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004270] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004278] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004287] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004295] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004304] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004309] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004318] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004327] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004331] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.004335] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3c50 is same with the state(5) to be set 00:24:46.120 [2024-07-15 09:33:33.010583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.120 [2024-07-15 09:33:33.010606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.120 [2024-07-15 09:33:33.010614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.120 [2024-07-15 09:33:33.010621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.120 [2024-07-15 09:33:33.010629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.120 [2024-07-15 
09:33:33.010636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:46.120 [2024-07-15 09:33:33.010644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:46.120 [2024-07-15 09:33:33.010651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:46.120 [2024-07-15 09:33:33.010659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbcc10 is same with the state(5) to be set
00:24:46.120 [2024-07-15 09:33:33.010742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2bbc0 is same with the state(5) to be set
00:24:46.120 [2024-07-15 09:33:33.010833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b9d0 is same with the state(5) to be set
00:24:46.121 [2024-07-15 09:33:33.010916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb4000 is same with the state(5) to be set
00:24:46.121 [2024-07-15 09:33:33.010999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87cb0 is same with the state(5) to be set
00:24:46.121 [2024-07-15 09:33:33.011083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa25970 is same with the state(5) to be set
00:24:46.121 [2024-07-15 09:33:33.011100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e95d0 (9): Bad file descriptor
00:24:46.121 [2024-07-15 09:33:33.011184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb4fd0 is same with the state(5) to be set
00:24:46.121 [2024-07-15 09:33:33.011204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:46.121 [2024-07-15 09:33:33.011213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:46.121 [2024-07-15 09:33:33.011221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0
cdw10:00000000 cdw11:00000000 00:24:46.121 [2024-07-15 09:33:33.011228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 09:33:33.011236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.121 [2024-07-15 09:33:33.011243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 09:33:33.011252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.121 [2024-07-15 09:33:33.011259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 09:33:33.011266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4ec610 is same with the state(5) to be set 00:24:46.121 [2024-07-15 09:33:33.011362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.121 [2024-07-15 09:33:33.011373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 09:33:33.011388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.121 [2024-07-15 09:33:33.011396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 09:33:33.011406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.121 [2024-07-15 09:33:33.011413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 09:33:33.011422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.121 [2024-07-15 09:33:33.011429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 09:33:33.011439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.121 [2024-07-15 09:33:33.011446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 09:33:33.011455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.121 [2024-07-15 09:33:33.011462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 09:33:33.011471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.121 [2024-07-15 09:33:33.011478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 
[2024-07-15 09:33:33.011487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.121 [2024-07-15 09:33:33.011494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 09:33:33.011504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.121 [2024-07-15 09:33:33.011511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 09:33:33.011520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.121 [2024-07-15 09:33:33.011527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 09:33:33.011536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.121 [2024-07-15 09:33:33.011543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 09:33:33.011554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.121 [2024-07-15 09:33:33.011562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 09:33:33.011571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.121 [2024-07-15 09:33:33.011578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 09:33:33.011587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.121 [2024-07-15 09:33:33.011594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 09:33:33.011603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.121 [2024-07-15 09:33:33.011610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 09:33:33.011619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.121 [2024-07-15 09:33:33.011626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 09:33:33.011635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.121 [2024-07-15 09:33:33.011642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 
09:33:33.011651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.121 [2024-07-15 09:33:33.011658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 09:33:33.011667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.121 [2024-07-15 09:33:33.011674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 09:33:33.011683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.121 [2024-07-15 09:33:33.011690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 09:33:33.011699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.121 [2024-07-15 09:33:33.011706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 09:33:33.011715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.121 [2024-07-15 09:33:33.011722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 09:33:33.011731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.121 [2024-07-15 09:33:33.011738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 09:33:33.011748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.121 [2024-07-15 09:33:33.011763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.121 [2024-07-15 09:33:33.011772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.011779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.011788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.011795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.011804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.011811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 
09:33:33.011820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.011827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.011837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.011844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.011853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.011860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.011869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.011876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.011885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.011892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.011901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.011908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.011917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.011924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.011933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.011940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.011949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.011956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.011966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.011974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 
09:33:33.011983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.011994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 
09:33:33.012147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 
09:33:33.012308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012468] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb3e230 was disconnected and freed. reset controller. 
00:24:46.122 [2024-07-15 09:33:33.012576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.122 [2024-07-15 09:33:33.012657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.122 [2024-07-15 09:33:33.012666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.012673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.012682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.012689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.012699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.012706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.012715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.012722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.012734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.012741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 
09:33:33.012756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.012764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.012773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.012780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.012789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.012797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.012806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.012813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.012822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.012829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.012838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.012845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.012855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.012862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.012871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.012878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.012888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.012895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.012904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.012911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.012920] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.012927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.012936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.012943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.012954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.012961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.012970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.012977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.012986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.012993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.013002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.013009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.013018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.013025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.013034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.013041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.013050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.013058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.013067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.013074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.013083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.013090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.013099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.013106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.013115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.013122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.013131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.013138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.013147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.013156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.123 [2024-07-15 09:33:33.013165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.123 [2024-07-15 09:33:33.013172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.013183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.013190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.013199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.013206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.013215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.019555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.019575] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.019591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.019608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.019624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.019641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.019657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.019673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.019693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.019710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.019727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.019743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.019767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.019784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.019800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.019816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.019833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.019848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.019864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.019880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.019896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.019917] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.019934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.019952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.019969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.019976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.020049] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9e5a40 was disconnected and freed. reset controller. 00:24:46.124 [2024-07-15 09:33:33.020200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.020212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.020226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.020234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.020243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.020251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.020261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.020268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.020278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.020285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.020294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.020302] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.020311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.020318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.020327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.020338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.020348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.020355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.020365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.020372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.020382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.020389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.020398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.020405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.020415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.020421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.020431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.020438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.020447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.020454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.020463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.020470] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.020480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.020487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.020496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.020503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.020512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.020519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.020528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.124 [2024-07-15 09:33:33.020535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.124 [2024-07-15 09:33:33.020545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020634] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020816] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.020987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.020994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.021003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.021010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.021019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.021026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.021035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.021042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.021051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.021058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.021067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.021074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.021083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.021091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.021100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.021107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.021116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.021123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.021132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.021138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.021147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.021154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.021164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.021172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.021182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.021189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.021198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.021205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.021214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.021221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.021230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.125 [2024-07-15 09:33:33.021237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.125 [2024-07-15 09:33:33.021246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.021253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.021262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.021269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.021316] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb417c0 was disconnected and freed. reset controller. 
00:24:46.126 [2024-07-15 09:33:33.021471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.126 [2024-07-15 09:33:33.021485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.021495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.126 [2024-07-15 09:33:33.021502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.021510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.126 [2024-07-15 09:33:33.021517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.021528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.126 [2024-07-15 09:33:33.021534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.021541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9de20 is same with the state(5) to be set 00:24:46.126 [2024-07-15 09:33:33.021563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbbcc10 (9): Bad file descriptor 00:24:46.126 [2024-07-15 09:33:33.021576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa2bbc0 (9): Bad file descriptor 00:24:46.126 [2024-07-15 09:33:33.021591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa0b9d0 (9): Bad file descriptor 00:24:46.126 [2024-07-15 09:33:33.021606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb4000 (9): Bad file descriptor 00:24:46.126 [2024-07-15 09:33:33.021618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87cb0 (9): Bad file descriptor 00:24:46.126 [2024-07-15 09:33:33.021630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa25970 (9): Bad file descriptor 00:24:46.126 [2024-07-15 09:33:33.021650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb4fd0 (9): Bad file descriptor 00:24:46.126 [2024-07-15 09:33:33.021665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4ec610 (9): Bad file descriptor 00:24:46.126 [2024-07-15 09:33:33.021692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.021701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.021723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.021731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:46.126 [2024-07-15 09:33:33.021743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.021750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.021771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.021778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.021790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.021797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.021812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.021819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.021832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.021839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.021852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.021859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.021871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.021879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.021891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.021898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.021914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.021921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.021934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.021941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 
09:33:33.021954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.021961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.021973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.021981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.021993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022149] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.126 [2024-07-15 09:33:33.022510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.126 [2024-07-15 09:33:33.022517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.022529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.022536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.022548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.022555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.022567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.022575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.022587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.022594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.022607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.022614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.022626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.022633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.022645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.022654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.022670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.022677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.022689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.022696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.022709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.022716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.022729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.022736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.022748] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.022767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.022780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.022787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.022800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.022807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.022819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.022826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.022839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.022845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.022858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.022865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.022877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.022885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.022897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.022904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.022919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.022926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.022939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.022945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.022958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.022965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.022978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.022985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.023883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa1420 is same with the state(5) to be set 00:24:46.127 [2024-07-15 09:33:33.023925] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xaa1420 was disconnected and freed. reset controller. 00:24:46.127 [2024-07-15 09:33:33.023934] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:46.127 [2024-07-15 09:33:33.027767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:46.127 [2024-07-15 09:33:33.028093] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:46.127 [2024-07-15 09:33:33.028122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:46.127 [2024-07-15 09:33:33.028134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.127 [2024-07-15 09:33:33.028406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.127 [2024-07-15 09:33:33.028420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbbcc10 with addr=10.0.0.2, port=4420 00:24:46.127 [2024-07-15 09:33:33.028428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbcc10 is same with the state(5) to be set 00:24:46.127 [2024-07-15 09:33:33.029036] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:46.127 [2024-07-15 09:33:33.029078] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:46.127 [2024-07-15 09:33:33.029368] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:46.127 [2024-07-15 09:33:33.029684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:24:46.127 [2024-07-15 09:33:33.030116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.127 [2024-07-15 09:33:33.030155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4ec610 with addr=10.0.0.2, port=4420 00:24:46.127 [2024-07-15 09:33:33.030166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4ec610 is same with the state(5) to be set 00:24:46.127 [2024-07-15 09:33:33.030573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.127 [2024-07-15 09:33:33.030584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9e95d0 with addr=10.0.0.2, port=4420 00:24:46.127 [2024-07-15 09:33:33.030591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e95d0 is same with the state(5) to be set 00:24:46.127 [2024-07-15 09:33:33.030604] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbbcc10 (9): Bad file descriptor 00:24:46.127 [2024-07-15 09:33:33.030669] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:46.127 [2024-07-15 09:33:33.030761] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:46.127 [2024-07-15 09:33:33.031035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.127 [2024-07-15 09:33:33.031048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87cb0 with addr=10.0.0.2, port=4420 00:24:46.127 [2024-07-15 09:33:33.031055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87cb0 is same with the state(5) to be set 00:24:46.127 [2024-07-15 09:33:33.031064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4ec610 (9): Bad file descriptor 00:24:46.127 [2024-07-15 09:33:33.031074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e95d0 (9): Bad file descriptor 00:24:46.127 [2024-07-15 09:33:33.031082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:46.127 [2024-07-15 09:33:33.031088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:46.127 [2024-07-15 09:33:33.031097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:46.127 [2024-07-15 09:33:33.031169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.127 [2024-07-15 09:33:33.031179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87cb0 (9): Bad file descriptor 00:24:46.127 [2024-07-15 09:33:33.031187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:46.127 [2024-07-15 09:33:33.031193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:46.127 [2024-07-15 09:33:33.031199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:46.127 [2024-07-15 09:33:33.031211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.127 [2024-07-15 09:33:33.031217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.127 [2024-07-15 09:33:33.031224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.127 [2024-07-15 09:33:33.031260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.127 [2024-07-15 09:33:33.031267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.127 [2024-07-15 09:33:33.031273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:46.127 [2024-07-15 09:33:33.031279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:46.127 [2024-07-15 09:33:33.031285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:24:46.127 [2024-07-15 09:33:33.031322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.127 [2024-07-15 09:33:33.031463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9de20 (9): Bad file descriptor 00:24:46.127 [2024-07-15 09:33:33.031590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.031602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.031617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.031625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.031635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.031646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.031656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.031663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.031672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.127 [2024-07-15 09:33:33.031679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.127 [2024-07-15 09:33:33.031689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.031696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.031706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.031713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.031722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.031729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.031738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.031745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.031760] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.031768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.031777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.031784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.031793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.031800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.031809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.031817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.031826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.031833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.031843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.031850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.031859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.031867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.031877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.031884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.031893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.031900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.031909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.031916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.031926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.031932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.031942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.031949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.031959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.031966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.031975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.031983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.031993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:46.128 [2024-07-15 09:33:33.032424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.128 [2024-07-15 09:33:33.032541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.128 [2024-07-15 09:33:33.032550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.032558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.032566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.032574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.032583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 
09:33:33.032590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.032599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.032606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.032616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.032623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.032632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.032639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.032648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.032655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.032663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3ce60 is same with the state(5) to be set 00:24:46.129 [2024-07-15 09:33:33.033954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.033968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.033984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.033993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034063] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034734] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.129 [2024-07-15 09:33:33.034741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.129 [2024-07-15 09:33:33.034758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.034766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.034775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.034782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.034791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.034798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.034808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.034815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.034824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.034831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.034842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.034849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.034858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.034866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.034875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.034882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.034892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.034899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.034909] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.034916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.034925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.034932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.034942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.034949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.034958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.034965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.034974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.034982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.034991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.034998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.035007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.035014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.035024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.035031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.035040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.035049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.035057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e3d90 is same with the state(5) to be set 00:24:46.130 [2024-07-15 09:33:33.036316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.036329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.036340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.036349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.036359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.036368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.036378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.036386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.036397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.036405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.036415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.036424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.036434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.036442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.036453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.036460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.036469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.036475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.036485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.036492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.036501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.036508] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.036517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.036529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.036538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.036546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.036555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.036562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.036571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.036578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.036587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.036594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.036603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.036610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.036620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.036627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.036636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.130 [2024-07-15 09:33:33.036643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.130 [2024-07-15 09:33:33.036652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.036659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.036669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.036675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.036685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.036691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.036701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.036708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.036717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.036724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.036734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.036742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.036754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.036762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.036771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.036778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.036787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.036794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.036803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.036811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.036820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.036827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.036836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.036843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.036852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.036859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.036868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.036875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.036885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.036892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.036901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.036908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.036917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.036924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.036933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.036941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.036951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.036958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.036967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.036974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.036983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.036991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.037000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.037007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.037016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.037023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.037032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.037040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.037049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.037056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.037065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.037072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.037081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.037088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.037098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.037105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.037114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.037121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.037131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.037138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.037148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.037156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.037165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.037172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:46.131 [2024-07-15 09:33:33.037182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.037189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.037198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.037205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.037214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.037222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.037231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.037240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.037249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.037256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.037265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.037272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.037281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.037288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.037298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.037305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.037315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.037322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.037331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.037338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 
09:33:33.037347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.037356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.037366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.037373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.037382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.037389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.037397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e5220 is same with the state(5) to be set 00:24:46.131 [2024-07-15 09:33:33.038667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.038680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.038693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.038702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.038713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.038722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.038733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.038742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.038757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.038766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.038776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.038783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.038792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.038799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.131 [2024-07-15 09:33:33.038808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.131 [2024-07-15 09:33:33.038815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.038824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.038831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.038840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.038848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.038859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.038867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.038876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.038883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.038892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.038899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.038908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.038915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.038925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.038932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.038942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.038948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.038958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.038965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.038974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.038981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.038990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.038997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039291] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039454] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039619] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.132 [2024-07-15 09:33:33.039651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.132 [2024-07-15 09:33:33.039660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.039667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.039678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.039685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.039694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.039701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.039710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.039717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.039727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.039734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.039742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb40310 is same with the state(5) to be set 00:24:46.133 [2024-07-15 09:33:33.041011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041053] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.133 [2024-07-15 09:33:33.041590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.133 [2024-07-15 09:33:33.041599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.134 [2024-07-15 09:33:33.041606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.134 [2024-07-15 09:33:33.041615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.134 [2024-07-15 09:33:33.041622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.134 [2024-07-15 09:33:33.041632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.134 [2024-07-15 09:33:33.041639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.134 [2024-07-15 09:33:33.041648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.134 [2024-07-15 09:33:33.041655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.134 [2024-07-15 09:33:33.041664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.134 [2024-07-15 09:33:33.041671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.134 [2024-07-15 09:33:33.041680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.134 [2024-07-15 09:33:33.041687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.134 [2024-07-15 09:33:33.041696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.134 [2024-07-15 09:33:33.041703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.134 [2024-07-15 09:33:33.041712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:46.134 [2024-07-15 09:33:33.041720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.134 [2024-07-15 09:33:33.041729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.134 [2024-07-15 09:33:33.041736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.134 [2024-07-15 09:33:33.041745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.134 [2024-07-15 09:33:33.041759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.134 [2024-07-15 09:33:33.041769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.134 [2024-07-15 09:33:33.041778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.134 [2024-07-15 09:33:33.041787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.134 [2024-07-15 09:33:33.041794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.134 [2024-07-15 09:33:33.041803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.134 [2024-07-15 09:33:33.041810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.134 [2024-07-15 09:33:33.041819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.134 [2024-07-15 09:33:33.041826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.134 [2024-07-15 09:33:33.041836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.134 [2024-07-15 09:33:33.041842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.135 [2024-07-15 09:33:33.041852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.135 [2024-07-15 09:33:33.041859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.135 [2024-07-15 09:33:33.041869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.135 [2024-07-15 09:33:33.041875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.135 [2024-07-15 09:33:33.041885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.135 [2024-07-15 
09:33:33.041893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.135 [2024-07-15 09:33:33.041902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.135 [2024-07-15 09:33:33.041909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.135 [2024-07-15 09:33:33.041918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.135 [2024-07-15 09:33:33.041925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.135 [2024-07-15 09:33:33.041935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.135 [2024-07-15 09:33:33.041943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.135 [2024-07-15 09:33:33.041952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.135 [2024-07-15 09:33:33.041959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.135 [2024-07-15 09:33:33.041969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.135 [2024-07-15 09:33:33.041976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.135 [2024-07-15 09:33:33.041987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.135 [2024-07-15 09:33:33.041994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.135 [2024-07-15 09:33:33.042004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.135 [2024-07-15 09:33:33.042011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.135 [2024-07-15 09:33:33.042021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.135 [2024-07-15 09:33:33.042028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.135 [2024-07-15 09:33:33.042037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.135 [2024-07-15 09:33:33.042044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.135 [2024-07-15 09:33:33.042054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.135 [2024-07-15 09:33:33.042061] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.135 [2024-07-15 09:33:33.042070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.135 [2024-07-15 09:33:33.042077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.135 [2024-07-15 09:33:33.042085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb43f70 is same with the state(5) to be set 00:24:46.135 [2024-07-15 09:33:33.043919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:46.135 [2024-07-15 09:33:33.043942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:46.135 [2024-07-15 09:33:33.043952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:46.135 [2024-07-15 09:33:33.043961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:46.135 [2024-07-15 09:33:33.044044] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:46.135 [2024-07-15 09:33:33.044118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:46.135 [2024-07-15 09:33:33.044507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.135 [2024-07-15 09:33:33.044520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb4fd0 with addr=10.0.0.2, port=4420 00:24:46.135 [2024-07-15 09:33:33.044528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb4fd0 is same with the state(5) to be set 00:24:46.135 [2024-07-15 09:33:33.044971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.135 [2024-07-15 09:33:33.045010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa25970 with addr=10.0.0.2, port=4420 00:24:46.135 [2024-07-15 09:33:33.045021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa25970 is same with the state(5) to be set 00:24:46.135 [2024-07-15 09:33:33.045364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.135 [2024-07-15 09:33:33.045375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2bbc0 with addr=10.0.0.2, port=4420 00:24:46.135 [2024-07-15 09:33:33.045383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2bbc0 is same with the state(5) to be set 00:24:46.135 [2024-07-15 09:33:33.045575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.135 [2024-07-15 09:33:33.045584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa0b9d0 with addr=10.0.0.2, port=4420 00:24:46.135 [2024-07-15 09:33:33.045591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b9d0 is same with the state(5) to be set 00:24:46.135 [2024-07-15 09:33:33.046661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.135 [2024-07-15 09:33:33.046675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.135 [2024-07-15 09:33:33.046690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.135 [2024-07-15 09:33:33.046698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.135 [2024-07-15 09:33:33.046707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.135 [2024-07-15 09:33:33.046714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.135 [2024-07-15 09:33:33.046724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.135 [2024-07-15 09:33:33.046731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.135 [2024-07-15 09:33:33.046741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.135 [2024-07-15 09:33:33.046748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.135 [2024-07-15 09:33:33.046765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.135 [2024-07-15 09:33:33.046772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.135 [2024-07-15 09:33:33.046782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.135 [2024-07-15 09:33:33.046789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.135 [2024-07-15 09:33:33.046798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.135 [2024-07-15 09:33:33.046805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.135 [2024-07-15 09:33:33.046814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.135 [2024-07-15 09:33:33.046821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.135 [2024-07-15 09:33:33.046830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.135 [2024-07-15 09:33:33.046837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.046847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.046854] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.046863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.046873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.046883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.046890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.046899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.046906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.046915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.046922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.046932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.046939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.046948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.046955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.046965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.046971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.046981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.046987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.046997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.047004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.047013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.047020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.047030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.047037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.047046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.047054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.047063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.047071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.047082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.047089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.047098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.047106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.047115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.047122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.047132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.047139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.047148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.047155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.047164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.047171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.047181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.047188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.047198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.047205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.047214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.047221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.047231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.047238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.047247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.047254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.047264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.047271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.047280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.047288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.047298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.047305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.047315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.047321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.047331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.047338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.047348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.047355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.047364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.047371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.047380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.047387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.136 [2024-07-15 09:33:33.047397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.136 [2024-07-15 09:33:33.047403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.137 [2024-07-15 09:33:33.047413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.137 [2024-07-15 09:33:33.047420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.137 [2024-07-15 09:33:33.047429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.137 [2024-07-15 09:33:33.047437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.137 [2024-07-15 09:33:33.047446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.137 [2024-07-15 09:33:33.047453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.137 [2024-07-15 09:33:33.047462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.137 [2024-07-15 09:33:33.047469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.137 [2024-07-15 09:33:33.047478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.137 [2024-07-15 09:33:33.047486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.137 [2024-07-15 09:33:33.047496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.137 [2024-07-15 09:33:33.047504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.137 [2024-07-15 09:33:33.047513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.137 [2024-07-15 09:33:33.047520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:46.137 [2024-07-15 09:33:33.047529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.137 [2024-07-15 09:33:33.047536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.137 [2024-07-15 09:33:33.047545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.137 [2024-07-15 09:33:33.047553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.137 [2024-07-15 09:33:33.047562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.137 [2024-07-15 09:33:33.047569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.137 [2024-07-15 09:33:33.047578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.137 [2024-07-15 09:33:33.047585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.137 [2024-07-15 09:33:33.047596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.137 [2024-07-15 09:33:33.047604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.137 [2024-07-15 09:33:33.047614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.137 [2024-07-15 09:33:33.047621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.137 [2024-07-15 09:33:33.047630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.137 [2024-07-15 09:33:33.047637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.137 [2024-07-15 09:33:33.047647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.137 [2024-07-15 09:33:33.047654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.137 [2024-07-15 09:33:33.047663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.137 [2024-07-15 09:33:33.047670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.137 [2024-07-15 09:33:33.047679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.137 [2024-07-15 09:33:33.047686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.137 [2024-07-15 
09:33:33.047696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.137 [2024-07-15 09:33:33.047704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.137 [2024-07-15 09:33:33.047714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.137 [2024-07-15 09:33:33.047721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.137 [2024-07-15 09:33:33.047730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.137 [2024-07-15 09:33:33.047737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.137 [2024-07-15 09:33:33.047745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb42a70 is same with the state(5) to be set 00:24:46.137 [2024-07-15 09:33:33.049486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:46.137 [2024-07-15 09:33:33.049509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.137 [2024-07-15 09:33:33.049518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:46.137 [2024-07-15 09:33:33.049527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:24:46.137 task offset: 32256 on job bdev=Nvme1n1 fails 00:24:46.137 00:24:46.137 Latency(us) 00:24:46.137 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.137 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:46.137 Job: Nvme1n1 ended in about 0.93 seconds with error 00:24:46.137 Verification LBA range: start 0x0 length 0x400 00:24:46.137 Nvme1n1 : 0.93 206.22 12.89 68.74 0.00 229944.53 14636.37 221074.77 00:24:46.137 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:46.137 Job: Nvme2n1 ended in about 0.94 seconds with error 00:24:46.137 Verification LBA range: start 0x0 length 0x400 00:24:46.137 Nvme2n1 : 0.94 136.03 8.50 68.01 0.00 303790.93 21189.97 248162.99 00:24:46.137 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:46.137 Job: Nvme3n1 ended in about 0.93 seconds with error 00:24:46.137 Verification LBA range: start 0x0 length 0x400 00:24:46.137 Nvme3n1 : 0.93 205.95 12.87 68.65 0.00 220956.59 14090.24 241172.48 00:24:46.137 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:46.137 Job: Nvme4n1 ended in about 0.94 seconds with error 00:24:46.137 Verification LBA range: start 0x0 length 0x400 00:24:46.137 Nvme4n1 : 0.94 207.77 12.99 67.84 0.00 215553.37 14417.92 244667.73 00:24:46.137 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:46.137 Job: Nvme5n1 ended in about 0.95 seconds with error 00:24:46.137 Verification LBA range: start 0x0 length 0x400 00:24:46.137 Nvme5n1 : 0.95 135.35 8.46 67.67 0.00 286422.76 19005.44 249910.61 00:24:46.137 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:46.137 Job: Nvme6n1 ended in about 0.93 
seconds with error 00:24:46.137 Verification LBA range: start 0x0 length 0x400 00:24:46.137 Nvme6n1 : 0.93 205.66 12.85 68.55 0.00 207029.97 15291.73 227191.47 00:24:46.137 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:46.137 Job: Nvme7n1 ended in about 0.95 seconds with error 00:24:46.137 Verification LBA range: start 0x0 length 0x400 00:24:46.137 Nvme7n1 : 0.95 206.74 12.92 67.51 0.00 202737.93 9611.95 246415.36 00:24:46.137 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:46.137 Job: Nvme8n1 ended in about 0.93 seconds with error 00:24:46.137 Verification LBA range: start 0x0 length 0x400 00:24:46.138 Nvme8n1 : 0.93 205.39 12.84 68.46 0.00 197848.75 16274.77 260396.37 00:24:46.138 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:46.138 Job: Nvme9n1 ended in about 0.96 seconds with error 00:24:46.138 Verification LBA range: start 0x0 length 0x400 00:24:46.138 Nvme9n1 : 0.96 133.89 8.37 66.94 0.00 264644.27 27962.03 277872.64 00:24:46.138 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:46.138 Job: Nvme10n1 ended in about 0.95 seconds with error 00:24:46.138 Verification LBA range: start 0x0 length 0x400 00:24:46.138 Nvme10n1 : 0.95 134.68 8.42 67.34 0.00 256556.94 21299.20 274377.39 00:24:46.138 =================================================================================================================== 00:24:46.138 Total : 1777.67 111.10 679.73 0.00 234094.74 9611.95 277872.64 00:24:46.138 [2024-07-15 09:33:33.075278] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:46.138 [2024-07-15 09:33:33.075321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:24:46.138 [2024-07-15 09:33:33.075612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.138 [2024-07-15 09:33:33.075630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb4000 with addr=10.0.0.2, port=4420 00:24:46.138 [2024-07-15 09:33:33.075640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb4000 is same with the state(5) to be set 00:24:46.138 [2024-07-15 09:33:33.075655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb4fd0 (9): Bad file descriptor 00:24:46.138 [2024-07-15 09:33:33.075667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa25970 (9): Bad file descriptor 00:24:46.138 [2024-07-15 09:33:33.075676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa2bbc0 (9): Bad file descriptor 00:24:46.138 [2024-07-15 09:33:33.075687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa0b9d0 (9): Bad file descriptor 00:24:46.138 [2024-07-15 09:33:33.076028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.138 [2024-07-15 09:33:33.076042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbbcc10 with addr=10.0.0.2, port=4420 00:24:46.138 [2024-07-15 09:33:33.076050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbcc10 is same with the state(5) to be set 00:24:46.138 [2024-07-15 09:33:33.076429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.138 [2024-07-15 09:33:33.076438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error 
of tqpair=0x9e95d0 with addr=10.0.0.2, port=4420 00:24:46.138 [2024-07-15 09:33:33.076446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e95d0 is same with the state(5) to be set 00:24:46.138 [2024-07-15 09:33:33.076637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.138 [2024-07-15 09:33:33.076646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4ec610 with addr=10.0.0.2, port=4420 00:24:46.138 [2024-07-15 09:33:33.076654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4ec610 is same with the state(5) to be set 00:24:46.138 [2024-07-15 09:33:33.076994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.138 [2024-07-15 09:33:33.077004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87cb0 with addr=10.0.0.2, port=4420 00:24:46.138 [2024-07-15 09:33:33.077012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87cb0 is same with the state(5) to be set 00:24:46.138 [2024-07-15 09:33:33.077338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.138 [2024-07-15 09:33:33.077347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9de20 with addr=10.0.0.2, port=4420 00:24:46.138 [2024-07-15 09:33:33.077354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9de20 is same with the state(5) to be set 00:24:46.138 [2024-07-15 09:33:33.077368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb4000 (9): Bad file descriptor 00:24:46.138 [2024-07-15 09:33:33.077377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:46.138 [2024-07-15 09:33:33.077384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:46.138 [2024-07-15 09:33:33.077392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:46.138 [2024-07-15 09:33:33.077405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:46.138 [2024-07-15 09:33:33.077412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:46.138 [2024-07-15 09:33:33.077418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:46.138 [2024-07-15 09:33:33.077429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:46.138 [2024-07-15 09:33:33.077435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:46.138 [2024-07-15 09:33:33.077442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:46.138 [2024-07-15 09:33:33.077452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:46.138 [2024-07-15 09:33:33.077458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:46.138 [2024-07-15 09:33:33.077465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:24:46.138 [2024-07-15 09:33:33.077493] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:46.138 [2024-07-15 09:33:33.077505] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:46.138 [2024-07-15 09:33:33.077515] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:46.138 [2024-07-15 09:33:33.077527] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:46.138 [2024-07-15 09:33:33.077537] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:46.138 [2024-07-15 09:33:33.077869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.138 [2024-07-15 09:33:33.077880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.138 [2024-07-15 09:33:33.077886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.138 [2024-07-15 09:33:33.077892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.138 [2024-07-15 09:33:33.077900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbbcc10 (9): Bad file descriptor 00:24:46.138 [2024-07-15 09:33:33.077909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e95d0 (9): Bad file descriptor 00:24:46.138 [2024-07-15 09:33:33.077918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4ec610 (9): Bad file descriptor 00:24:46.138 [2024-07-15 09:33:33.077927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87cb0 (9): Bad file descriptor 00:24:46.138 [2024-07-15 09:33:33.077936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9de20 (9): Bad file descriptor 00:24:46.138 [2024-07-15 09:33:33.077943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:46.138 [2024-07-15 09:33:33.077950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:46.138 [2024-07-15 09:33:33.077956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:46.138 [2024-07-15 09:33:33.078201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.138 [2024-07-15 09:33:33.078212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:46.138 [2024-07-15 09:33:33.078218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:46.138 [2024-07-15 09:33:33.078226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:24:46.138 [2024-07-15 09:33:33.078235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.138 [2024-07-15 09:33:33.078241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.138 [2024-07-15 09:33:33.078248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.138 [2024-07-15 09:33:33.078257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:46.138 [2024-07-15 09:33:33.078263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:46.138 [2024-07-15 09:33:33.078269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:46.138 [2024-07-15 09:33:33.078280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:46.138 [2024-07-15 09:33:33.078285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:46.138 [2024-07-15 09:33:33.078292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:46.138 [2024-07-15 09:33:33.078301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:46.138 [2024-07-15 09:33:33.078307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:46.138 [2024-07-15 09:33:33.078314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:46.138 [2024-07-15 09:33:33.078345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.138 [2024-07-15 09:33:33.078352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.138 [2024-07-15 09:33:33.078358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.138 [2024-07-15 09:33:33.078365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.138 [2024-07-15 09:33:33.078371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.139 09:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:24:46.139 09:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:24:47.083 09:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 781548 00:24:47.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (781548) - No such process 00:24:47.083 09:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:24:47.083 09:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:24:47.083 09:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:47.083 09:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:47.083 09:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:47.083 09:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:47.083 09:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:47.083 09:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:24:47.342 09:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:47.342 09:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:24:47.342 09:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:47.342 09:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:47.342 rmmod nvme_tcp 00:24:47.342 rmmod nvme_fabrics 00:24:47.342 rmmod nvme_keyring 00:24:47.342 09:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:47.342 09:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:24:47.342 09:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:24:47.342 09:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:47.342 09:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:47.342 09:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:47.342 09:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:47.342 09:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:47.342 09:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:47.343 09:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.343 09:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:47.343 09:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.255 09:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:49.255 00:24:49.255 real 0m7.636s 00:24:49.255 user 0m18.367s 00:24:49.255 sys 0m1.170s 00:24:49.255 
09:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:49.255 09:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:49.255 ************************************ 00:24:49.255 END TEST nvmf_shutdown_tc3 00:24:49.255 ************************************ 00:24:49.255 09:33:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:24:49.255 09:33:36 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:24:49.255 00:24:49.255 real 0m33.129s 00:24:49.255 user 1m15.819s 00:24:49.255 sys 0m9.766s 00:24:49.255 09:33:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:49.255 09:33:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:49.255 ************************************ 00:24:49.255 END TEST nvmf_shutdown 00:24:49.255 ************************************ 00:24:49.516 09:33:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:49.516 09:33:36 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:24:49.516 09:33:36 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:49.516 09:33:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:49.516 09:33:36 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:24:49.516 09:33:36 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:49.516 09:33:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:49.516 09:33:36 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:24:49.516 09:33:36 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:49.516 09:33:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:49.516 09:33:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:49.516 09:33:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:49.516 ************************************ 00:24:49.516 START TEST nvmf_multicontroller 00:24:49.516 ************************************ 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:49.516 * Looking for test storage... 
00:24:49.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:49.516 09:33:36 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:49.516 09:33:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.776 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:49.776 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:49.776 09:33:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:24:49.776 09:33:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:57.922 09:33:44 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:57.922 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:57.923 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:57.923 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:57.923 Found net devices under 0000:31:00.0: cvl_0_0 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:57.923 Found net devices under 0000:31:00.1: cvl_0_1 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:57.923 09:33:44 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:57.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:57.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:24:57.923 00:24:57.923 --- 10.0.0.2 ping statistics --- 00:24:57.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.923 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:57.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:57.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:24:57.923 00:24:57.923 --- 10.0.0.1 ping statistics --- 00:24:57.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.923 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=787034 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 787034 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 787034 ']' 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:57.923 09:33:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:57.923 [2024-07-15 09:33:45.043182] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:24:57.923 [2024-07-15 09:33:45.043229] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:57.923 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.185 [2024-07-15 09:33:45.134789] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:58.185 [2024-07-15 09:33:45.216071] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.185 [2024-07-15 09:33:45.216124] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.185 [2024-07-15 09:33:45.216133] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.185 [2024-07-15 09:33:45.216140] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.185 [2024-07-15 09:33:45.216146] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:58.185 [2024-07-15 09:33:45.216278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.185 [2024-07-15 09:33:45.216448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.185 [2024-07-15 09:33:45.216448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:58.758 [2024-07-15 09:33:45.861964] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:58.758 Malloc0 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:58.758 [2024-07-15 09:33:45.925986] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.758 
09:33:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:58.758 [2024-07-15 09:33:45.937929] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:58.758 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.759 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:59.020 Malloc1 00:24:59.020 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.020 09:33:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:59.020 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.020 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:59.020 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.020 09:33:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:59.020 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.020 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:59.020 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.020 09:33:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:59.020 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.020 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:59.020 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.020 09:33:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:59.020 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.020 09:33:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:59.020 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.020 09:33:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=787164 00:24:59.020 09:33:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:59.020 09:33:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:59.020 09:33:46 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 787164 /var/tmp/bdevperf.sock 00:24:59.020 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 787164 ']' 00:24:59.020 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:59.020 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:59.020 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:59.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:59.020 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:59.020 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:59.965 NVMe0n1 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.965 1 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:59.965 request: 00:24:59.965 { 00:24:59.965 "name": "NVMe0", 00:24:59.965 "trtype": "tcp", 00:24:59.965 "traddr": "10.0.0.2", 00:24:59.965 "adrfam": "ipv4", 00:24:59.965 "trsvcid": "4420", 00:24:59.965 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:59.965 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:59.965 "hostaddr": "10.0.0.2", 00:24:59.965 "hostsvcid": "60000", 00:24:59.965 "prchk_reftag": false, 00:24:59.965 "prchk_guard": false, 00:24:59.965 "hdgst": false, 00:24:59.965 "ddgst": false, 00:24:59.965 "method": "bdev_nvme_attach_controller", 00:24:59.965 "req_id": 1 00:24:59.965 } 00:24:59.965 Got JSON-RPC error response 00:24:59.965 response: 00:24:59.965 { 00:24:59.965 "code": -114, 00:24:59.965 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:59.965 } 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.965 09:33:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:59.965 request: 00:24:59.965 { 00:24:59.965 "name": "NVMe0", 00:24:59.965 "trtype": "tcp", 00:24:59.965 "traddr": "10.0.0.2", 00:24:59.965 "adrfam": "ipv4", 00:24:59.965 "trsvcid": "4420", 00:24:59.965 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:59.965 "hostaddr": "10.0.0.2", 00:24:59.965 "hostsvcid": "60000", 00:24:59.965 "prchk_reftag": false, 00:24:59.965 "prchk_guard": false, 00:24:59.965 
"hdgst": false, 00:24:59.965 "ddgst": false, 00:24:59.965 "method": "bdev_nvme_attach_controller", 00:24:59.965 "req_id": 1 00:24:59.965 } 00:24:59.965 Got JSON-RPC error response 00:24:59.965 response: 00:24:59.965 { 00:24:59.965 "code": -114, 00:24:59.965 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:59.965 } 00:24:59.965 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:59.965 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:59.965 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:59.965 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:59.965 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:59.965 09:33:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:59.965 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:59.965 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:59.965 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:59.965 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:59.965 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:59.965 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:59.965 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:59.965 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.965 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:59.965 request: 00:24:59.965 { 00:24:59.965 "name": "NVMe0", 00:24:59.965 "trtype": "tcp", 00:24:59.965 "traddr": "10.0.0.2", 00:24:59.965 "adrfam": "ipv4", 00:24:59.965 "trsvcid": "4420", 00:24:59.965 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:59.965 "hostaddr": "10.0.0.2", 00:24:59.965 "hostsvcid": "60000", 00:24:59.965 "prchk_reftag": false, 00:24:59.965 "prchk_guard": false, 00:24:59.965 "hdgst": false, 00:24:59.965 "ddgst": false, 00:24:59.965 "multipath": "disable", 00:24:59.965 "method": "bdev_nvme_attach_controller", 00:24:59.965 "req_id": 1 00:24:59.965 } 00:24:59.965 Got JSON-RPC error response 00:24:59.965 response: 00:24:59.965 { 00:24:59.966 "code": -114, 00:24:59.966 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:24:59.966 } 00:24:59.966 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:59.966 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:59.966 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:59.966 09:33:47 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:59.966 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:59.966 09:33:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:59.966 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:59.966 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:59.966 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:59.966 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:59.966 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:59.966 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:59.966 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:59.966 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.966 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:59.966 request: 00:24:59.966 { 00:24:59.966 "name": "NVMe0", 00:24:59.966 "trtype": "tcp", 00:24:59.966 "traddr": "10.0.0.2", 00:24:59.966 "adrfam": "ipv4", 00:24:59.966 "trsvcid": "4420", 00:24:59.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:59.966 "hostaddr": "10.0.0.2", 00:24:59.966 "hostsvcid": "60000", 00:24:59.966 "prchk_reftag": false, 00:24:59.966 "prchk_guard": false, 00:24:59.966 "hdgst": false, 00:24:59.966 "ddgst": false, 00:24:59.966 "multipath": "failover", 00:24:59.966 "method": "bdev_nvme_attach_controller", 00:24:59.966 "req_id": 1 00:24:59.966 } 00:24:59.966 Got JSON-RPC error response 00:24:59.966 response: 00:24:59.966 { 00:24:59.966 "code": -114, 00:24:59.966 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:59.966 } 00:24:59.966 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:59.966 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:59.966 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:59.966 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:59.966 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:59.966 09:33:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:59.966 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.966 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:00.226 00:25:00.226 09:33:47 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.226 09:33:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:00.226 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.226 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:00.226 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.226 09:33:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:00.227 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.227 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:00.227 00:25:00.227 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.227 09:33:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:00.227 09:33:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:00.227 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.227 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:00.227 09:33:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.227 09:33:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:00.227 09:33:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:01.613 0 00:25:01.613 09:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:01.613 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.613 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:01.613 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.613 09:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 787164 00:25:01.613 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 787164 ']' 00:25:01.613 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 787164 00:25:01.613 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:25:01.613 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:01.613 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 787164 00:25:01.613 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:01.613 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:01.613 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 787164' 00:25:01.613 killing process with pid 787164 00:25:01.613 09:33:48 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 787164 00:25:01.613 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 787164 00:25:01.613 09:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:01.613 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.613 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:01.613 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.613 09:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:25:01.614 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:01.614 [2024-07-15 09:33:46.056478] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:25:01.614 [2024-07-15 09:33:46.056528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787164 ] 00:25:01.614 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.614 [2024-07-15 09:33:46.121181] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.614 [2024-07-15 09:33:46.186870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.614 [2024-07-15 09:33:47.286748] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name a88068d3-3b9e-4931-8ee8-4f28e790f634 already exists 00:25:01.614 [2024-07-15 09:33:47.286779] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:a88068d3-3b9e-4931-8ee8-4f28e790f634 alias for bdev NVMe1n1 00:25:01.614 [2024-07-15 09:33:47.286787] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:01.614 Running I/O for 1 seconds... 
00:25:01.614 00:25:01.614 Latency(us) 00:25:01.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.614 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:01.614 NVMe0n1 : 1.00 27439.82 107.19 0.00 0.00 4653.78 2061.65 15182.51 00:25:01.614 =================================================================================================================== 00:25:01.614 Total : 27439.82 107.19 0.00 0.00 4653.78 2061.65 15182.51 00:25:01.614 Received shutdown signal, test time was about 1.000000 seconds 00:25:01.614 00:25:01.614 Latency(us) 00:25:01.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.614 =================================================================================================================== 00:25:01.614 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:01.614 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:01.614 rmmod nvme_tcp 00:25:01.614 rmmod nvme_fabrics 00:25:01.614 rmmod nvme_keyring 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 787034 ']' 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 787034 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 787034 ']' 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 787034 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 787034 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 787034' 00:25:01.614 killing process with pid 787034 00:25:01.614 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 787034 00:25:01.614 09:33:48 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 787034 00:25:01.879 09:33:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:01.880 09:33:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:01.880 09:33:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:01.880 09:33:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:01.880 09:33:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:01.880 09:33:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.880 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:01.880 09:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.845 09:33:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:03.845 00:25:03.845 real 0m14.419s 00:25:03.845 user 0m16.435s 00:25:03.845 sys 0m6.793s 00:25:03.845 09:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:03.845 09:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:03.845 ************************************ 00:25:03.845 END TEST nvmf_multicontroller 00:25:03.845 ************************************ 00:25:03.845 09:33:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:03.845 09:33:51 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:03.845 09:33:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:03.845 09:33:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:03.845 09:33:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:04.106 ************************************ 00:25:04.106 START TEST nvmf_aer 00:25:04.106 ************************************ 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:04.106 * Looking for test storage... 
00:25:04.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:04.106 09:33:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.107 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:04.107 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:04.107 09:33:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:25:04.107 09:33:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:12.246 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 
0x159b)' 00:25:12.246 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:12.246 Found net devices under 0000:31:00.0: cvl_0_0 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.246 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:12.247 Found net devices under 0000:31:00.1: cvl_0_1 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:12.247 
09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:12.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:12.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:25:12.247 00:25:12.247 --- 10.0.0.2 ping statistics --- 00:25:12.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.247 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:12.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:12.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:25:12.247 00:25:12.247 --- 10.0.0.1 ping statistics --- 00:25:12.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.247 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=792425 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 792425 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 792425 ']' 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:12.247 09:33:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:12.247 [2024-07-15 09:33:59.435515] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:25:12.247 [2024-07-15 09:33:59.435580] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:12.507 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.507 [2024-07-15 09:33:59.514495] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:12.507 [2024-07-15 09:33:59.589783] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:12.507 [2024-07-15 09:33:59.589819] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:12.507 [2024-07-15 09:33:59.589827] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:12.507 [2024-07-15 09:33:59.589833] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:12.507 [2024-07-15 09:33:59.589839] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:12.507 [2024-07-15 09:33:59.589981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.507 [2024-07-15 09:33:59.590094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:12.507 [2024-07-15 09:33:59.590251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.507 [2024-07-15 09:33:59.590252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:13.077 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:13.077 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:25:13.077 09:34:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:13.077 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:13.077 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.077 09:34:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:13.077 09:34:00 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:13.077 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.077 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.077 [2024-07-15 09:34:00.243280] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:13.077 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.077 09:34:00 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:13.077 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.077 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.077 Malloc0 00:25:13.077 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.077 09:34:00 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:13.077 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.077 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.338 [2024-07-15 09:34:00.302590] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.338 [ 00:25:13.338 { 00:25:13.338 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:13.338 "subtype": "Discovery", 00:25:13.338 "listen_addresses": [], 00:25:13.338 "allow_any_host": true, 00:25:13.338 "hosts": [] 00:25:13.338 }, 00:25:13.338 { 00:25:13.338 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:13.338 "subtype": "NVMe", 00:25:13.338 "listen_addresses": [ 00:25:13.338 { 00:25:13.338 "trtype": "TCP", 00:25:13.338 "adrfam": "IPv4", 00:25:13.338 "traddr": "10.0.0.2", 00:25:13.338 "trsvcid": "4420" 00:25:13.338 } 00:25:13.338 ], 00:25:13.338 "allow_any_host": true, 00:25:13.338 "hosts": [], 00:25:13.338 "serial_number": "SPDK00000000000001", 00:25:13.338 "model_number": "SPDK bdev Controller", 00:25:13.338 "max_namespaces": 2, 00:25:13.338 "min_cntlid": 1, 00:25:13.338 "max_cntlid": 65519, 00:25:13.338 "namespaces": [ 00:25:13.338 { 00:25:13.338 "nsid": 1, 00:25:13.338 "bdev_name": "Malloc0", 00:25:13.338 "name": "Malloc0", 00:25:13.338 "nguid": "E8499A3C66B0486D819FBCA19B19D679", 00:25:13.338 "uuid": "e8499a3c-66b0-486d-819f-bca19b19d679" 00:25:13.338 } 00:25:13.338 ] 00:25:13.338 } 00:25:13.338 ] 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=792596 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:25:13.338 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:25:13.338 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:25:13.599 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:13.599 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:25:13.599 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:25:13.599 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:25:13.599 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:13.599 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:13.599 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:25:13.599 09:34:00 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:13.599 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.599 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.599 Malloc1 00:25:13.599 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.599 09:34:00 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:13.599 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.599 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.599 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.599 09:34:00 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:13.599 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.599 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.599 Asynchronous Event Request test 00:25:13.599 Attaching to 10.0.0.2 00:25:13.599 Attached to 10.0.0.2 00:25:13.599 Registering asynchronous event callbacks... 00:25:13.599 Starting namespace attribute notice tests for all controllers... 00:25:13.599 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:13.599 aer_cb - Changed Namespace 00:25:13.599 Cleaning up... 
00:25:13.599 [ 00:25:13.599 { 00:25:13.599 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:13.599 "subtype": "Discovery", 00:25:13.599 "listen_addresses": [], 00:25:13.599 "allow_any_host": true, 00:25:13.599 "hosts": [] 00:25:13.599 }, 00:25:13.599 { 00:25:13.599 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:13.599 "subtype": "NVMe", 00:25:13.599 "listen_addresses": [ 00:25:13.599 { 00:25:13.600 "trtype": "TCP", 00:25:13.600 "adrfam": "IPv4", 00:25:13.600 "traddr": "10.0.0.2", 00:25:13.600 "trsvcid": "4420" 00:25:13.600 } 00:25:13.600 ], 00:25:13.600 "allow_any_host": true, 00:25:13.600 "hosts": [], 00:25:13.600 "serial_number": "SPDK00000000000001", 00:25:13.600 "model_number": "SPDK bdev Controller", 00:25:13.600 "max_namespaces": 2, 00:25:13.600 "min_cntlid": 1, 00:25:13.600 "max_cntlid": 65519, 00:25:13.600 "namespaces": [ 00:25:13.600 { 00:25:13.600 "nsid": 1, 00:25:13.600 "bdev_name": "Malloc0", 00:25:13.600 "name": "Malloc0", 00:25:13.600 "nguid": "E8499A3C66B0486D819FBCA19B19D679", 00:25:13.600 "uuid": "e8499a3c-66b0-486d-819f-bca19b19d679" 00:25:13.600 }, 00:25:13.600 { 00:25:13.600 "nsid": 2, 00:25:13.600 "bdev_name": "Malloc1", 00:25:13.600 "name": "Malloc1", 00:25:13.600 "nguid": "59D843155BFB49F587DF124F0A16BBDE", 00:25:13.600 "uuid": "59d84315-5bfb-49f5-87df-124f0a16bbde" 00:25:13.600 } 00:25:13.600 ] 00:25:13.600 } 00:25:13.600 ] 00:25:13.600 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.600 09:34:00 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 792596 00:25:13.600 09:34:00 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:13.600 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.600 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.600 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.600 09:34:00 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:13.600 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.600 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.600 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.600 09:34:00 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:13.600 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.600 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.600 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.600 09:34:00 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:13.600 09:34:00 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:25:13.600 09:34:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:13.600 09:34:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:25:13.600 09:34:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:13.600 09:34:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:25:13.600 09:34:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:13.600 09:34:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:13.600 rmmod nvme_tcp 00:25:13.600 rmmod nvme_fabrics 00:25:13.867 rmmod nvme_keyring 00:25:13.868 09:34:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:13.868 09:34:00 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@124 -- # set -e 00:25:13.868 09:34:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:25:13.868 09:34:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 792425 ']' 00:25:13.868 09:34:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 792425 00:25:13.868 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 792425 ']' 00:25:13.868 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 792425 00:25:13.868 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:25:13.868 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:13.868 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 792425 00:25:13.868 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:13.868 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:13.868 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 792425' 00:25:13.868 killing process with pid 792425 00:25:13.868 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 792425 00:25:13.868 09:34:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 792425 00:25:13.868 09:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:13.868 09:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:13.868 09:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:13.868 09:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:13.868 09:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:13.868 09:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.868 09:34:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:13.868 09:34:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.407 09:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:16.407 00:25:16.407 real 0m12.007s 00:25:16.407 user 0m8.079s 00:25:16.407 sys 0m6.424s 00:25:16.407 09:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:16.407 09:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:16.407 ************************************ 00:25:16.407 END TEST nvmf_aer 00:25:16.407 ************************************ 00:25:16.407 09:34:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:16.407 09:34:03 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:16.407 09:34:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:16.407 09:34:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:16.407 09:34:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:16.407 ************************************ 00:25:16.407 START TEST nvmf_async_init 00:25:16.407 ************************************ 00:25:16.407 09:34:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:16.407 * Looking for test storage... 
00:25:16.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:16.407 09:34:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:16.407 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:25:16.407 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:16.407 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:16.407 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:16.407 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:16.407 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:16.407 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:16.407 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:16.407 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:16.407 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:16.407 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:16.407 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:16.407 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:16.407 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:16.407 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:16.407 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8014e73e8c2240b1926d18aa322e3ad0 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:16.408 09:34:03 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:25:16.408 09:34:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:24.544 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:24.545 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:24.545 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:24.545 Found net devices under 0000:31:00.0: cvl_0_0 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:24.545 Found net devices under 0000:31:00.1: cvl_0_1 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:24.545 09:34:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:24.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:24.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:25:24.545 00:25:24.545 --- 10.0.0.2 ping statistics --- 00:25:24.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.545 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:24.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:24.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:25:24.545 00:25:24.545 --- 10.0.0.1 ping statistics --- 00:25:24.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.545 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=797385 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 797385 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 797385 ']' 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:24.545 09:34:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:24.545 [2024-07-15 09:34:11.379221] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
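The nvmf_tcp_init trace above builds the test topology from the two physical E810 ports: the target port cvl_0_0 is moved into a dedicated network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2/24, the initiator port cvl_0_1 stays in the default namespace as 10.0.0.1/24, an iptables rule opens TCP port 4420, and a ping in each direction confirms reachability. A minimal stand-alone sketch of the same bring-up, using the interface and namespace names taken from the log (run as root):

    TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                        # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"                    # initiator side, default namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1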
00:25:24.545 [2024-07-15 09:34:11.379286] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:24.545 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.545 [2024-07-15 09:34:11.457655] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.545 [2024-07-15 09:34:11.532557] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:24.545 [2024-07-15 09:34:11.532594] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:24.545 [2024-07-15 09:34:11.532601] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:24.545 [2024-07-15 09:34:11.532608] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:24.545 [2024-07-15 09:34:11.532614] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:24.545 [2024-07-15 09:34:11.532631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.117 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:25.117 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:25:25.117 09:34:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:25.117 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:25.117 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:25.117 09:34:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:25.117 09:34:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:25.117 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.117 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:25.117 [2024-07-15 09:34:12.191289] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:25.118 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.118 09:34:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:25.118 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.118 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:25.118 null0 00:25:25.118 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.118 09:34:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:25.118 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.118 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:25.118 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.118 09:34:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:25.118 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.118 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:25.118 09:34:12 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.118 09:34:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8014e73e8c2240b1926d18aa322e3ad0 00:25:25.118 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.118 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:25.118 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.118 09:34:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:25.118 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.118 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:25.118 [2024-07-15 09:34:12.251531] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.118 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.118 09:34:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:25.118 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.118 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:25.379 nvme0n1 00:25:25.379 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.379 09:34:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:25.379 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.379 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:25.379 [ 00:25:25.379 { 00:25:25.379 "name": "nvme0n1", 00:25:25.379 "aliases": [ 00:25:25.379 "8014e73e-8c22-40b1-926d-18aa322e3ad0" 00:25:25.379 ], 00:25:25.379 "product_name": "NVMe disk", 00:25:25.379 "block_size": 512, 00:25:25.379 "num_blocks": 2097152, 00:25:25.379 "uuid": "8014e73e-8c22-40b1-926d-18aa322e3ad0", 00:25:25.379 "assigned_rate_limits": { 00:25:25.379 "rw_ios_per_sec": 0, 00:25:25.379 "rw_mbytes_per_sec": 0, 00:25:25.379 "r_mbytes_per_sec": 0, 00:25:25.379 "w_mbytes_per_sec": 0 00:25:25.379 }, 00:25:25.379 "claimed": false, 00:25:25.379 "zoned": false, 00:25:25.379 "supported_io_types": { 00:25:25.379 "read": true, 00:25:25.379 "write": true, 00:25:25.379 "unmap": false, 00:25:25.379 "flush": true, 00:25:25.379 "reset": true, 00:25:25.379 "nvme_admin": true, 00:25:25.379 "nvme_io": true, 00:25:25.379 "nvme_io_md": false, 00:25:25.379 "write_zeroes": true, 00:25:25.379 "zcopy": false, 00:25:25.379 "get_zone_info": false, 00:25:25.379 "zone_management": false, 00:25:25.379 "zone_append": false, 00:25:25.379 "compare": true, 00:25:25.379 "compare_and_write": true, 00:25:25.379 "abort": true, 00:25:25.379 "seek_hole": false, 00:25:25.379 "seek_data": false, 00:25:25.379 "copy": true, 00:25:25.379 "nvme_iov_md": false 00:25:25.379 }, 00:25:25.379 "memory_domains": [ 00:25:25.379 { 00:25:25.379 "dma_device_id": "system", 00:25:25.379 "dma_device_type": 1 00:25:25.379 } 00:25:25.379 ], 00:25:25.379 "driver_specific": { 00:25:25.379 "nvme": [ 00:25:25.379 { 00:25:25.379 "trid": { 00:25:25.379 "trtype": "TCP", 00:25:25.379 "adrfam": "IPv4", 00:25:25.379 "traddr": "10.0.0.2", 
00:25:25.379 "trsvcid": "4420", 00:25:25.379 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:25.379 }, 00:25:25.379 "ctrlr_data": { 00:25:25.379 "cntlid": 1, 00:25:25.379 "vendor_id": "0x8086", 00:25:25.379 "model_number": "SPDK bdev Controller", 00:25:25.379 "serial_number": "00000000000000000000", 00:25:25.379 "firmware_revision": "24.09", 00:25:25.379 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:25.379 "oacs": { 00:25:25.379 "security": 0, 00:25:25.379 "format": 0, 00:25:25.379 "firmware": 0, 00:25:25.379 "ns_manage": 0 00:25:25.379 }, 00:25:25.379 "multi_ctrlr": true, 00:25:25.379 "ana_reporting": false 00:25:25.379 }, 00:25:25.379 "vs": { 00:25:25.379 "nvme_version": "1.3" 00:25:25.379 }, 00:25:25.379 "ns_data": { 00:25:25.379 "id": 1, 00:25:25.379 "can_share": true 00:25:25.379 } 00:25:25.379 } 00:25:25.379 ], 00:25:25.379 "mp_policy": "active_passive" 00:25:25.379 } 00:25:25.379 } 00:25:25.379 ] 00:25:25.379 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.379 09:34:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:25.380 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.380 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:25.380 [2024-07-15 09:34:12.513522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:25.380 [2024-07-15 09:34:12.513584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e59f0 (9): Bad file descriptor 00:25:25.641 [2024-07-15 09:34:12.655853] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:25.641 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.641 09:34:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:25.641 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.641 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:25.641 [ 00:25:25.641 { 00:25:25.641 "name": "nvme0n1", 00:25:25.641 "aliases": [ 00:25:25.641 "8014e73e-8c22-40b1-926d-18aa322e3ad0" 00:25:25.641 ], 00:25:25.641 "product_name": "NVMe disk", 00:25:25.641 "block_size": 512, 00:25:25.641 "num_blocks": 2097152, 00:25:25.641 "uuid": "8014e73e-8c22-40b1-926d-18aa322e3ad0", 00:25:25.641 "assigned_rate_limits": { 00:25:25.641 "rw_ios_per_sec": 0, 00:25:25.641 "rw_mbytes_per_sec": 0, 00:25:25.641 "r_mbytes_per_sec": 0, 00:25:25.641 "w_mbytes_per_sec": 0 00:25:25.641 }, 00:25:25.641 "claimed": false, 00:25:25.641 "zoned": false, 00:25:25.641 "supported_io_types": { 00:25:25.641 "read": true, 00:25:25.641 "write": true, 00:25:25.641 "unmap": false, 00:25:25.641 "flush": true, 00:25:25.641 "reset": true, 00:25:25.641 "nvme_admin": true, 00:25:25.641 "nvme_io": true, 00:25:25.641 "nvme_io_md": false, 00:25:25.641 "write_zeroes": true, 00:25:25.641 "zcopy": false, 00:25:25.641 "get_zone_info": false, 00:25:25.641 "zone_management": false, 00:25:25.641 "zone_append": false, 00:25:25.641 "compare": true, 00:25:25.641 "compare_and_write": true, 00:25:25.641 "abort": true, 00:25:25.641 "seek_hole": false, 00:25:25.641 "seek_data": false, 00:25:25.641 "copy": true, 00:25:25.641 "nvme_iov_md": false 00:25:25.641 }, 00:25:25.641 "memory_domains": [ 00:25:25.641 { 00:25:25.641 "dma_device_id": "system", 00:25:25.641 "dma_device_type": 1 
00:25:25.641 } 00:25:25.641 ], 00:25:25.641 "driver_specific": { 00:25:25.641 "nvme": [ 00:25:25.641 { 00:25:25.641 "trid": { 00:25:25.641 "trtype": "TCP", 00:25:25.641 "adrfam": "IPv4", 00:25:25.641 "traddr": "10.0.0.2", 00:25:25.641 "trsvcid": "4420", 00:25:25.641 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:25.641 }, 00:25:25.641 "ctrlr_data": { 00:25:25.641 "cntlid": 2, 00:25:25.641 "vendor_id": "0x8086", 00:25:25.641 "model_number": "SPDK bdev Controller", 00:25:25.641 "serial_number": "00000000000000000000", 00:25:25.641 "firmware_revision": "24.09", 00:25:25.641 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:25.641 "oacs": { 00:25:25.641 "security": 0, 00:25:25.641 "format": 0, 00:25:25.641 "firmware": 0, 00:25:25.641 "ns_manage": 0 00:25:25.641 }, 00:25:25.641 "multi_ctrlr": true, 00:25:25.641 "ana_reporting": false 00:25:25.641 }, 00:25:25.641 "vs": { 00:25:25.641 "nvme_version": "1.3" 00:25:25.641 }, 00:25:25.641 "ns_data": { 00:25:25.641 "id": 1, 00:25:25.641 "can_share": true 00:25:25.641 } 00:25:25.641 } 00:25:25.641 ], 00:25:25.641 "mp_policy": "active_passive" 00:25:25.641 } 00:25:25.641 } 00:25:25.641 ] 00:25:25.641 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.641 09:34:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.641 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.641 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:25.641 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.AxbMBt1NiZ 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.AxbMBt1NiZ 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:25.642 [2024-07-15 09:34:12.710141] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:25.642 [2024-07-15 09:34:12.710253] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AxbMBt1NiZ 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:25.642 [2024-07-15 09:34:12.718155] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AxbMBt1NiZ 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:25.642 [2024-07-15 09:34:12.726197] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:25.642 [2024-07-15 09:34:12.726234] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:25.642 nvme0n1 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:25.642 [ 00:25:25.642 { 00:25:25.642 "name": "nvme0n1", 00:25:25.642 "aliases": [ 00:25:25.642 "8014e73e-8c22-40b1-926d-18aa322e3ad0" 00:25:25.642 ], 00:25:25.642 "product_name": "NVMe disk", 00:25:25.642 "block_size": 512, 00:25:25.642 "num_blocks": 2097152, 00:25:25.642 "uuid": "8014e73e-8c22-40b1-926d-18aa322e3ad0", 00:25:25.642 "assigned_rate_limits": { 00:25:25.642 "rw_ios_per_sec": 0, 00:25:25.642 "rw_mbytes_per_sec": 0, 00:25:25.642 "r_mbytes_per_sec": 0, 00:25:25.642 "w_mbytes_per_sec": 0 00:25:25.642 }, 00:25:25.642 "claimed": false, 00:25:25.642 "zoned": false, 00:25:25.642 "supported_io_types": { 00:25:25.642 "read": true, 00:25:25.642 "write": true, 00:25:25.642 "unmap": false, 00:25:25.642 "flush": true, 00:25:25.642 "reset": true, 00:25:25.642 "nvme_admin": true, 00:25:25.642 "nvme_io": true, 00:25:25.642 "nvme_io_md": false, 00:25:25.642 "write_zeroes": true, 00:25:25.642 "zcopy": false, 00:25:25.642 "get_zone_info": false, 00:25:25.642 "zone_management": false, 00:25:25.642 "zone_append": false, 00:25:25.642 "compare": true, 00:25:25.642 "compare_and_write": true, 00:25:25.642 "abort": true, 00:25:25.642 "seek_hole": false, 00:25:25.642 "seek_data": false, 00:25:25.642 "copy": true, 00:25:25.642 "nvme_iov_md": false 00:25:25.642 }, 00:25:25.642 "memory_domains": [ 00:25:25.642 { 00:25:25.642 "dma_device_id": "system", 00:25:25.642 "dma_device_type": 1 00:25:25.642 } 00:25:25.642 ], 00:25:25.642 "driver_specific": { 00:25:25.642 "nvme": [ 00:25:25.642 { 00:25:25.642 "trid": { 00:25:25.642 "trtype": "TCP", 00:25:25.642 "adrfam": "IPv4", 00:25:25.642 "traddr": "10.0.0.2", 00:25:25.642 "trsvcid": "4421", 00:25:25.642 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:25.642 }, 00:25:25.642 "ctrlr_data": { 00:25:25.642 "cntlid": 3, 00:25:25.642 "vendor_id": "0x8086", 00:25:25.642 "model_number": "SPDK bdev Controller", 00:25:25.642 "serial_number": "00000000000000000000", 00:25:25.642 "firmware_revision": "24.09", 00:25:25.642 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:25:25.642 "oacs": { 00:25:25.642 "security": 0, 00:25:25.642 "format": 0, 00:25:25.642 "firmware": 0, 00:25:25.642 "ns_manage": 0 00:25:25.642 }, 00:25:25.642 "multi_ctrlr": true, 00:25:25.642 "ana_reporting": false 00:25:25.642 }, 00:25:25.642 "vs": { 00:25:25.642 "nvme_version": "1.3" 00:25:25.642 }, 00:25:25.642 "ns_data": { 00:25:25.642 "id": 1, 00:25:25.642 "can_share": true 00:25:25.642 } 00:25:25.642 } 00:25:25.642 ], 00:25:25.642 "mp_policy": "active_passive" 00:25:25.642 } 00:25:25.642 } 00:25:25.642 ] 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.AxbMBt1NiZ 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:25.642 09:34:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:25.642 rmmod nvme_tcp 00:25:25.904 rmmod nvme_fabrics 00:25:25.904 rmmod nvme_keyring 00:25:25.904 09:34:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:25.904 09:34:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:25:25.904 09:34:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:25:25.904 09:34:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 797385 ']' 00:25:25.904 09:34:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 797385 00:25:25.904 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 797385 ']' 00:25:25.904 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 797385 00:25:25.904 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:25:25.904 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:25.904 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 797385 00:25:25.904 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:25.904 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:25.904 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 797385' 00:25:25.904 killing process with pid 797385 00:25:25.904 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 797385 00:25:25.904 [2024-07-15 09:34:12.948763] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:25:25.904 [2024-07-15 09:34:12.948789] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:25.904 09:34:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 797385 00:25:25.904 09:34:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:25.904 09:34:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:25.904 09:34:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:25.904 09:34:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:25.904 09:34:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:25.904 09:34:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.904 09:34:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:25.904 09:34:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.455 09:34:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:28.455 00:25:28.455 real 0m11.968s 00:25:28.455 user 0m4.137s 00:25:28.455 sys 0m6.236s 00:25:28.455 09:34:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:28.455 09:34:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:28.455 ************************************ 00:25:28.455 END TEST nvmf_async_init 00:25:28.455 ************************************ 00:25:28.455 09:34:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:28.455 09:34:15 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:28.455 09:34:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:28.455 09:34:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:28.455 09:34:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:28.455 ************************************ 00:25:28.455 START TEST dma 00:25:28.455 ************************************ 00:25:28.455 09:34:15 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:28.455 * Looking for test storage... 
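For reference, the nvmf_async_init run that just finished boils down to the following RPC sequence against the namespaced target (rpc_cmd in the trace is the test suite's wrapper; this sketch assumes the equivalent direct calls through scripts/rpc.py on the default RPC socket, with the namespace GUID and sample TLS key copied from the log and /tmp/psk.key standing in for the mktemp'd key file):

    RPC=scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o
    $RPC bdev_null_create null0 1024 512
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8014e73e8c2240b1926d18aa322e3ad0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    $RPC bdev_get_bdevs -b nvme0n1               # namespace shows up as nvme0n1, cntlid 1
    $RPC bdev_nvme_reset_controller nvme0        # reconnect bumps cntlid to 2
    $RPC bdev_nvme_detach_controller nvme0
    # TLS variant: restricted host list, --secure-channel listener on 4421, PSK file on both ends
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > /tmp/psk.key
    chmod 0600 /tmp/psk.key
    $RPC nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/psk.key
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
         -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/psk.key

The deprecation warnings printed at teardown ('PSK path' and 'spdk_nvme_ctrlr_opts.psk') come from this PSK-file mechanism, which the log notes is scheduled for removal in v24.09.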
00:25:28.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:28.455 09:34:15 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:28.455 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:25:28.455 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.455 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.455 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.455 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.455 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.455 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.455 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.455 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.455 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.455 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.455 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:28.455 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:28.455 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.455 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.455 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:28.455 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.455 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:28.455 09:34:15 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.456 09:34:15 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.456 09:34:15 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.456 09:34:15 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.456 09:34:15 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.456 09:34:15 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.456 09:34:15 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:25:28.456 09:34:15 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.456 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:25:28.456 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:28.456 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:28.456 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.456 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.456 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.456 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:28.456 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:28.456 09:34:15 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:28.456 09:34:15 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:28.456 09:34:15 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:25:28.456 00:25:28.456 real 0m0.132s 00:25:28.456 user 0m0.065s 00:25:28.456 sys 0m0.074s 00:25:28.456 09:34:15 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:28.456 09:34:15 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:25:28.456 ************************************ 00:25:28.456 END TEST dma 00:25:28.456 ************************************ 00:25:28.456 09:34:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:28.456 09:34:15 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:28.456 09:34:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:28.456 09:34:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:28.456 09:34:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:28.456 ************************************ 00:25:28.456 START TEST nvmf_identify 00:25:28.456 ************************************ 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:28.456 * Looking for test storage... 
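The dma stage above is effectively a no-op on this configuration: host/dma.sh checks the transport at its top and exits immediately when it is not rdma, which is why the stage completes in roughly a tenth of a second. A hedged reconstruction of the guard from the two script lines referenced in the trace (dma.sh@12-13; the variable name is an assumption, the log only shows the already-expanded value tcp):

    # host/dma.sh guard, as exercised above
    [ "$TEST_TRANSPORT" != rdma ] && exit 0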
00:25:28.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:25:28.456 09:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:36.592 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:36.592 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:36.592 Found net devices under 0000:31:00.0: cvl_0_0 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:36.592 Found net devices under 0000:31:00.1: cvl_0_1 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:36.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:36.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:25:36.592 00:25:36.592 --- 10.0.0.2 ping statistics --- 00:25:36.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.592 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:36.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:36.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:25:36.592 00:25:36.592 --- 10.0.0.1 ping statistics --- 00:25:36.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.592 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=802341 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 802341 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 802341 ']' 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:36.592 09:34:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:36.592 [2024-07-15 09:34:23.661408] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:25:36.592 [2024-07-15 09:34:23.661488] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.592 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.592 [2024-07-15 09:34:23.744074] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:36.854 [2024-07-15 09:34:23.820778] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:36.854 [2024-07-15 09:34:23.820820] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.854 [2024-07-15 09:34:23.820828] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:36.854 [2024-07-15 09:34:23.820834] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:36.854 [2024-07-15 09:34:23.820840] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:36.854 [2024-07-15 09:34:23.820926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.854 [2024-07-15 09:34:23.821040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:36.854 [2024-07-15 09:34:23.821196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.854 [2024-07-15 09:34:23.821197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:37.425 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:37.425 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:25:37.425 09:34:24 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:37.425 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.425 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:37.425 [2024-07-15 09:34:24.440200] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.425 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.425 09:34:24 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:37.425 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:37.425 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:37.425 09:34:24 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:37.425 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.425 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:37.425 Malloc0 00:25:37.425 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.425 09:34:24 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:37.425 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.425 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:37.425 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.425 09:34:24 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:37.425 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.425 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:37.425 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.425 09:34:24 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:37.425 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:25:37.425 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:37.425 [2024-07-15 09:34:24.535661] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:37.425 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.426 09:34:24 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:37.426 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.426 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:37.426 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.426 09:34:24 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:37.426 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.426 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:37.426 [ 00:25:37.426 { 00:25:37.426 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:37.426 "subtype": "Discovery", 00:25:37.426 "listen_addresses": [ 00:25:37.426 { 00:25:37.426 "trtype": "TCP", 00:25:37.426 "adrfam": "IPv4", 00:25:37.426 "traddr": "10.0.0.2", 00:25:37.426 "trsvcid": "4420" 00:25:37.426 } 00:25:37.426 ], 00:25:37.426 "allow_any_host": true, 00:25:37.426 "hosts": [] 00:25:37.426 }, 00:25:37.426 { 00:25:37.426 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:37.426 "subtype": "NVMe", 00:25:37.426 "listen_addresses": [ 00:25:37.426 { 00:25:37.426 "trtype": "TCP", 00:25:37.426 "adrfam": "IPv4", 00:25:37.426 "traddr": "10.0.0.2", 00:25:37.426 "trsvcid": "4420" 00:25:37.426 } 00:25:37.426 ], 00:25:37.426 "allow_any_host": true, 00:25:37.426 "hosts": [], 00:25:37.426 "serial_number": "SPDK00000000000001", 00:25:37.426 "model_number": "SPDK bdev Controller", 00:25:37.426 "max_namespaces": 32, 00:25:37.426 "min_cntlid": 1, 00:25:37.426 "max_cntlid": 65519, 00:25:37.426 "namespaces": [ 00:25:37.426 { 00:25:37.426 "nsid": 1, 00:25:37.426 "bdev_name": "Malloc0", 00:25:37.426 "name": "Malloc0", 00:25:37.426 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:37.426 "eui64": "ABCDEF0123456789", 00:25:37.426 "uuid": "fc6edf9a-360f-4dfd-8949-1f13e43fb7a5" 00:25:37.426 } 00:25:37.426 ] 00:25:37.426 } 00:25:37.426 ] 00:25:37.426 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.426 09:34:24 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:37.426 [2024-07-15 09:34:24.596328] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
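[editor's note] For reference, the rpc_cmd sequence traced above maps onto plain scripts/rpc.py invocations. The sketch below is an assumed equivalent (the harness wraps these calls in its rpc_cmd helper), reproducing the transport, bdev, subsystem, namespace and listener setup whose end state is the nvmf_get_subsystems JSON printed above; SPDK_DIR is the same assumed path as in the earlier sketch:

  # Assumed rpc.py equivalent of the traced rpc_cmd calls; arguments copied from the trace.
  RPC="sudo $SPDK_DIR/scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_get_subsystems   # should list the discovery subsystem and cnode1, as shown above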
00:25:37.426 [2024-07-15 09:34:24.596375] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid802560 ] 00:25:37.426 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.693 [2024-07-15 09:34:24.631802] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:37.693 [2024-07-15 09:34:24.631855] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:37.693 [2024-07-15 09:34:24.631860] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:37.693 [2024-07-15 09:34:24.631872] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:37.693 [2024-07-15 09:34:24.631878] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:37.693 [2024-07-15 09:34:24.632167] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:37.693 [2024-07-15 09:34:24.632195] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c17ec0 0 00:25:37.693 [2024-07-15 09:34:24.638765] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:37.693 [2024-07-15 09:34:24.638776] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:37.693 [2024-07-15 09:34:24.638781] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:37.693 [2024-07-15 09:34:24.638784] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:37.693 [2024-07-15 09:34:24.638819] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.693 [2024-07-15 09:34:24.638825] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.693 [2024-07-15 09:34:24.638830] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c17ec0) 00:25:37.693 [2024-07-15 09:34:24.638845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:37.693 [2024-07-15 09:34:24.638860] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9ae40, cid 0, qid 0 00:25:37.693 [2024-07-15 09:34:24.645762] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.693 [2024-07-15 09:34:24.645771] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.693 [2024-07-15 09:34:24.645775] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.693 [2024-07-15 09:34:24.645780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9ae40) on tqpair=0x1c17ec0 00:25:37.693 [2024-07-15 09:34:24.645791] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:37.693 [2024-07-15 09:34:24.645798] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:37.693 [2024-07-15 09:34:24.645803] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:37.693 [2024-07-15 09:34:24.645816] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.693 [2024-07-15 09:34:24.645820] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.693 [2024-07-15 09:34:24.645824] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c17ec0) 00:25:37.693 [2024-07-15 09:34:24.645831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.693 [2024-07-15 09:34:24.645844] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9ae40, cid 0, qid 0 00:25:37.693 [2024-07-15 09:34:24.645917] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.693 [2024-07-15 09:34:24.645923] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.693 [2024-07-15 09:34:24.645927] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.693 [2024-07-15 09:34:24.645931] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9ae40) on tqpair=0x1c17ec0 00:25:37.693 [2024-07-15 09:34:24.645936] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:37.693 [2024-07-15 09:34:24.645943] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:37.693 [2024-07-15 09:34:24.645949] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.693 [2024-07-15 09:34:24.645953] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.693 [2024-07-15 09:34:24.645960] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c17ec0) 00:25:37.693 [2024-07-15 09:34:24.645967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.693 [2024-07-15 09:34:24.645977] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9ae40, cid 0, qid 0 00:25:37.693 [2024-07-15 09:34:24.646034] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.693 [2024-07-15 09:34:24.646041] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.693 [2024-07-15 09:34:24.646044] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.693 [2024-07-15 09:34:24.646048] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9ae40) on tqpair=0x1c17ec0 00:25:37.693 [2024-07-15 09:34:24.646053] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:37.693 [2024-07-15 09:34:24.646061] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:37.693 [2024-07-15 09:34:24.646068] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.693 [2024-07-15 09:34:24.646071] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.693 [2024-07-15 09:34:24.646075] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c17ec0) 00:25:37.693 [2024-07-15 09:34:24.646081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.693 [2024-07-15 09:34:24.646091] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9ae40, cid 0, qid 0 00:25:37.693 [2024-07-15 09:34:24.646156] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.693 
[2024-07-15 09:34:24.646162] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.693 [2024-07-15 09:34:24.646166] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.693 [2024-07-15 09:34:24.646170] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9ae40) on tqpair=0x1c17ec0 00:25:37.693 [2024-07-15 09:34:24.646175] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:37.693 [2024-07-15 09:34:24.646184] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.693 [2024-07-15 09:34:24.646187] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.693 [2024-07-15 09:34:24.646191] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c17ec0) 00:25:37.693 [2024-07-15 09:34:24.646198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.694 [2024-07-15 09:34:24.646207] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9ae40, cid 0, qid 0 00:25:37.694 [2024-07-15 09:34:24.646264] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.694 [2024-07-15 09:34:24.646270] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.694 [2024-07-15 09:34:24.646273] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.646277] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9ae40) on tqpair=0x1c17ec0 00:25:37.694 [2024-07-15 09:34:24.646282] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:37.694 [2024-07-15 09:34:24.646287] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:37.694 [2024-07-15 09:34:24.646294] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:37.694 [2024-07-15 09:34:24.646399] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:37.694 [2024-07-15 09:34:24.646404] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:37.694 [2024-07-15 09:34:24.646415] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.646419] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.646422] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c17ec0) 00:25:37.694 [2024-07-15 09:34:24.646429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.694 [2024-07-15 09:34:24.646439] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9ae40, cid 0, qid 0 00:25:37.694 [2024-07-15 09:34:24.646498] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.694 [2024-07-15 09:34:24.646504] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.694 [2024-07-15 09:34:24.646508] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.646512] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9ae40) on tqpair=0x1c17ec0 00:25:37.694 [2024-07-15 09:34:24.646516] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:37.694 [2024-07-15 09:34:24.646525] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.646529] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.646532] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c17ec0) 00:25:37.694 [2024-07-15 09:34:24.646539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.694 [2024-07-15 09:34:24.646548] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9ae40, cid 0, qid 0 00:25:37.694 [2024-07-15 09:34:24.646602] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.694 [2024-07-15 09:34:24.646608] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.694 [2024-07-15 09:34:24.646611] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.646615] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9ae40) on tqpair=0x1c17ec0 00:25:37.694 [2024-07-15 09:34:24.646619] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:37.694 [2024-07-15 09:34:24.646624] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:37.694 [2024-07-15 09:34:24.646631] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:37.694 [2024-07-15 09:34:24.646638] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:37.694 [2024-07-15 09:34:24.646647] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.646651] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c17ec0) 00:25:37.694 [2024-07-15 09:34:24.646658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.694 [2024-07-15 09:34:24.646668] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9ae40, cid 0, qid 0 00:25:37.694 [2024-07-15 09:34:24.646766] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:37.694 [2024-07-15 09:34:24.646773] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:37.694 [2024-07-15 09:34:24.646777] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.646782] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c17ec0): datao=0, datal=4096, cccid=0 00:25:37.694 [2024-07-15 09:34:24.646786] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c9ae40) on tqpair(0x1c17ec0): expected_datao=0, payload_size=4096 00:25:37.694 [2024-07-15 09:34:24.646793] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.646812] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.646817] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.687798] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.694 [2024-07-15 09:34:24.687808] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.694 [2024-07-15 09:34:24.687811] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.687815] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9ae40) on tqpair=0x1c17ec0 00:25:37.694 [2024-07-15 09:34:24.687823] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:37.694 [2024-07-15 09:34:24.687831] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:37.694 [2024-07-15 09:34:24.687835] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:37.694 [2024-07-15 09:34:24.687840] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:37.694 [2024-07-15 09:34:24.687845] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:37.694 [2024-07-15 09:34:24.687849] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:37.694 [2024-07-15 09:34:24.687858] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:37.694 [2024-07-15 09:34:24.687865] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.687869] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.687872] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c17ec0) 00:25:37.694 [2024-07-15 09:34:24.687880] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:37.694 [2024-07-15 09:34:24.687892] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9ae40, cid 0, qid 0 00:25:37.694 [2024-07-15 09:34:24.687954] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.694 [2024-07-15 09:34:24.687961] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.694 [2024-07-15 09:34:24.687964] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.687968] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9ae40) on tqpair=0x1c17ec0 00:25:37.694 [2024-07-15 09:34:24.687976] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.687979] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.687983] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c17ec0) 00:25:37.694 [2024-07-15 09:34:24.687989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.694 [2024-07-15 09:34:24.687995] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.687998] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.688002] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c17ec0) 00:25:37.694 [2024-07-15 09:34:24.688008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.694 [2024-07-15 09:34:24.688013] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.688017] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.688020] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c17ec0) 00:25:37.694 [2024-07-15 09:34:24.688026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.694 [2024-07-15 09:34:24.688034] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.688038] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.688041] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.694 [2024-07-15 09:34:24.688047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.694 [2024-07-15 09:34:24.688052] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:37.694 [2024-07-15 09:34:24.688062] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:37.694 [2024-07-15 09:34:24.688068] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.688072] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c17ec0) 00:25:37.694 [2024-07-15 09:34:24.688078] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.694 [2024-07-15 09:34:24.688090] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9ae40, cid 0, qid 0 00:25:37.694 [2024-07-15 09:34:24.688095] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9afc0, cid 1, qid 0 00:25:37.694 [2024-07-15 09:34:24.688099] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b140, cid 2, qid 0 00:25:37.694 [2024-07-15 09:34:24.688104] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.694 [2024-07-15 09:34:24.688109] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b440, cid 4, qid 0 00:25:37.694 [2024-07-15 09:34:24.688215] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.694 [2024-07-15 09:34:24.688221] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.694 [2024-07-15 09:34:24.688225] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.688228] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b440) on tqpair=0x1c17ec0 00:25:37.694 [2024-07-15 09:34:24.688233] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:37.694 [2024-07-15 09:34:24.688238] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:37.694 [2024-07-15 09:34:24.688248] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.694 [2024-07-15 09:34:24.688252] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c17ec0) 00:25:37.694 [2024-07-15 09:34:24.688259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.694 [2024-07-15 09:34:24.688268] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b440, cid 4, qid 0 00:25:37.694 [2024-07-15 09:34:24.688338] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:37.694 [2024-07-15 09:34:24.688345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:37.695 [2024-07-15 09:34:24.688348] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.688352] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c17ec0): datao=0, datal=4096, cccid=4 00:25:37.695 [2024-07-15 09:34:24.688356] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c9b440) on tqpair(0x1c17ec0): expected_datao=0, payload_size=4096 00:25:37.695 [2024-07-15 09:34:24.688360] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.688367] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.688371] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.688399] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.695 [2024-07-15 09:34:24.688406] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.695 [2024-07-15 09:34:24.688409] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.688413] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b440) on tqpair=0x1c17ec0 00:25:37.695 [2024-07-15 09:34:24.688424] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:37.695 [2024-07-15 09:34:24.688445] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.688449] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c17ec0) 00:25:37.695 [2024-07-15 09:34:24.688455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.695 [2024-07-15 09:34:24.688462] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.688466] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.688469] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c17ec0) 00:25:37.695 [2024-07-15 09:34:24.688475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.695 [2024-07-15 09:34:24.688488] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1c9b440, cid 4, qid 0 00:25:37.695 [2024-07-15 09:34:24.688493] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b5c0, cid 5, qid 0 00:25:37.695 [2024-07-15 09:34:24.688585] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:37.695 [2024-07-15 09:34:24.688591] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:37.695 [2024-07-15 09:34:24.688595] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.688598] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c17ec0): datao=0, datal=1024, cccid=4 00:25:37.695 [2024-07-15 09:34:24.688603] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c9b440) on tqpair(0x1c17ec0): expected_datao=0, payload_size=1024 00:25:37.695 [2024-07-15 09:34:24.688607] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.688613] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.688617] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.688622] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.695 [2024-07-15 09:34:24.688628] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.695 [2024-07-15 09:34:24.688631] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.688635] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b5c0) on tqpair=0x1c17ec0 00:25:37.695 [2024-07-15 09:34:24.730759] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.695 [2024-07-15 09:34:24.730769] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.695 [2024-07-15 09:34:24.730772] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.730776] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b440) on tqpair=0x1c17ec0 00:25:37.695 [2024-07-15 09:34:24.730793] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.730797] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c17ec0) 00:25:37.695 [2024-07-15 09:34:24.730804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.695 [2024-07-15 09:34:24.730820] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b440, cid 4, qid 0 00:25:37.695 [2024-07-15 09:34:24.730894] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:37.695 [2024-07-15 09:34:24.730900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:37.695 [2024-07-15 09:34:24.730904] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.730910] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c17ec0): datao=0, datal=3072, cccid=4 00:25:37.695 [2024-07-15 09:34:24.730915] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c9b440) on tqpair(0x1c17ec0): expected_datao=0, payload_size=3072 00:25:37.695 [2024-07-15 09:34:24.730919] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.730926] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.730929] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.730994] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.695 [2024-07-15 09:34:24.731001] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.695 [2024-07-15 09:34:24.731004] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.731008] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b440) on tqpair=0x1c17ec0 00:25:37.695 [2024-07-15 09:34:24.731016] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.731020] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c17ec0) 00:25:37.695 [2024-07-15 09:34:24.731026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.695 [2024-07-15 09:34:24.731039] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b440, cid 4, qid 0 00:25:37.695 [2024-07-15 09:34:24.731107] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:37.695 [2024-07-15 09:34:24.731114] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:37.695 [2024-07-15 09:34:24.731117] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.731120] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c17ec0): datao=0, datal=8, cccid=4 00:25:37.695 [2024-07-15 09:34:24.731125] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c9b440) on tqpair(0x1c17ec0): expected_datao=0, payload_size=8 00:25:37.695 [2024-07-15 09:34:24.731129] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.731135] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.731139] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.771801] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.695 [2024-07-15 09:34:24.771810] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.695 [2024-07-15 09:34:24.771813] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.695 [2024-07-15 09:34:24.771817] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b440) on tqpair=0x1c17ec0 00:25:37.695 ===================================================== 00:25:37.695 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:37.695 ===================================================== 00:25:37.695 Controller Capabilities/Features 00:25:37.695 ================================ 00:25:37.695 Vendor ID: 0000 00:25:37.695 Subsystem Vendor ID: 0000 00:25:37.695 Serial Number: .................... 00:25:37.695 Model Number: ........................................ 
00:25:37.695 Firmware Version: 24.09 00:25:37.695 Recommended Arb Burst: 0 00:25:37.695 IEEE OUI Identifier: 00 00 00 00:25:37.695 Multi-path I/O 00:25:37.695 May have multiple subsystem ports: No 00:25:37.695 May have multiple controllers: No 00:25:37.695 Associated with SR-IOV VF: No 00:25:37.695 Max Data Transfer Size: 131072 00:25:37.695 Max Number of Namespaces: 0 00:25:37.695 Max Number of I/O Queues: 1024 00:25:37.695 NVMe Specification Version (VS): 1.3 00:25:37.695 NVMe Specification Version (Identify): 1.3 00:25:37.695 Maximum Queue Entries: 128 00:25:37.695 Contiguous Queues Required: Yes 00:25:37.695 Arbitration Mechanisms Supported 00:25:37.695 Weighted Round Robin: Not Supported 00:25:37.695 Vendor Specific: Not Supported 00:25:37.695 Reset Timeout: 15000 ms 00:25:37.695 Doorbell Stride: 4 bytes 00:25:37.695 NVM Subsystem Reset: Not Supported 00:25:37.695 Command Sets Supported 00:25:37.695 NVM Command Set: Supported 00:25:37.695 Boot Partition: Not Supported 00:25:37.695 Memory Page Size Minimum: 4096 bytes 00:25:37.695 Memory Page Size Maximum: 4096 bytes 00:25:37.695 Persistent Memory Region: Not Supported 00:25:37.695 Optional Asynchronous Events Supported 00:25:37.695 Namespace Attribute Notices: Not Supported 00:25:37.695 Firmware Activation Notices: Not Supported 00:25:37.695 ANA Change Notices: Not Supported 00:25:37.695 PLE Aggregate Log Change Notices: Not Supported 00:25:37.695 LBA Status Info Alert Notices: Not Supported 00:25:37.695 EGE Aggregate Log Change Notices: Not Supported 00:25:37.695 Normal NVM Subsystem Shutdown event: Not Supported 00:25:37.695 Zone Descriptor Change Notices: Not Supported 00:25:37.695 Discovery Log Change Notices: Supported 00:25:37.695 Controller Attributes 00:25:37.695 128-bit Host Identifier: Not Supported 00:25:37.695 Non-Operational Permissive Mode: Not Supported 00:25:37.695 NVM Sets: Not Supported 00:25:37.695 Read Recovery Levels: Not Supported 00:25:37.695 Endurance Groups: Not Supported 00:25:37.695 Predictable Latency Mode: Not Supported 00:25:37.695 Traffic Based Keep ALive: Not Supported 00:25:37.695 Namespace Granularity: Not Supported 00:25:37.695 SQ Associations: Not Supported 00:25:37.695 UUID List: Not Supported 00:25:37.695 Multi-Domain Subsystem: Not Supported 00:25:37.695 Fixed Capacity Management: Not Supported 00:25:37.695 Variable Capacity Management: Not Supported 00:25:37.695 Delete Endurance Group: Not Supported 00:25:37.695 Delete NVM Set: Not Supported 00:25:37.695 Extended LBA Formats Supported: Not Supported 00:25:37.695 Flexible Data Placement Supported: Not Supported 00:25:37.695 00:25:37.695 Controller Memory Buffer Support 00:25:37.695 ================================ 00:25:37.695 Supported: No 00:25:37.695 00:25:37.695 Persistent Memory Region Support 00:25:37.695 ================================ 00:25:37.695 Supported: No 00:25:37.695 00:25:37.695 Admin Command Set Attributes 00:25:37.695 ============================ 00:25:37.695 Security Send/Receive: Not Supported 00:25:37.695 Format NVM: Not Supported 00:25:37.695 Firmware Activate/Download: Not Supported 00:25:37.695 Namespace Management: Not Supported 00:25:37.695 Device Self-Test: Not Supported 00:25:37.695 Directives: Not Supported 00:25:37.695 NVMe-MI: Not Supported 00:25:37.696 Virtualization Management: Not Supported 00:25:37.696 Doorbell Buffer Config: Not Supported 00:25:37.696 Get LBA Status Capability: Not Supported 00:25:37.696 Command & Feature Lockdown Capability: Not Supported 00:25:37.696 Abort Command Limit: 1 00:25:37.696 Async 
Event Request Limit: 4 00:25:37.696 Number of Firmware Slots: N/A 00:25:37.696 Firmware Slot 1 Read-Only: N/A 00:25:37.696 Firmware Activation Without Reset: N/A 00:25:37.696 Multiple Update Detection Support: N/A 00:25:37.696 Firmware Update Granularity: No Information Provided 00:25:37.696 Per-Namespace SMART Log: No 00:25:37.696 Asymmetric Namespace Access Log Page: Not Supported 00:25:37.696 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:37.696 Command Effects Log Page: Not Supported 00:25:37.696 Get Log Page Extended Data: Supported 00:25:37.696 Telemetry Log Pages: Not Supported 00:25:37.696 Persistent Event Log Pages: Not Supported 00:25:37.696 Supported Log Pages Log Page: May Support 00:25:37.696 Commands Supported & Effects Log Page: Not Supported 00:25:37.696 Feature Identifiers & Effects Log Page:May Support 00:25:37.696 NVMe-MI Commands & Effects Log Page: May Support 00:25:37.696 Data Area 4 for Telemetry Log: Not Supported 00:25:37.696 Error Log Page Entries Supported: 128 00:25:37.696 Keep Alive: Not Supported 00:25:37.696 00:25:37.696 NVM Command Set Attributes 00:25:37.696 ========================== 00:25:37.696 Submission Queue Entry Size 00:25:37.696 Max: 1 00:25:37.696 Min: 1 00:25:37.696 Completion Queue Entry Size 00:25:37.696 Max: 1 00:25:37.696 Min: 1 00:25:37.696 Number of Namespaces: 0 00:25:37.696 Compare Command: Not Supported 00:25:37.696 Write Uncorrectable Command: Not Supported 00:25:37.696 Dataset Management Command: Not Supported 00:25:37.696 Write Zeroes Command: Not Supported 00:25:37.696 Set Features Save Field: Not Supported 00:25:37.696 Reservations: Not Supported 00:25:37.696 Timestamp: Not Supported 00:25:37.696 Copy: Not Supported 00:25:37.696 Volatile Write Cache: Not Present 00:25:37.696 Atomic Write Unit (Normal): 1 00:25:37.696 Atomic Write Unit (PFail): 1 00:25:37.696 Atomic Compare & Write Unit: 1 00:25:37.696 Fused Compare & Write: Supported 00:25:37.696 Scatter-Gather List 00:25:37.696 SGL Command Set: Supported 00:25:37.696 SGL Keyed: Supported 00:25:37.696 SGL Bit Bucket Descriptor: Not Supported 00:25:37.696 SGL Metadata Pointer: Not Supported 00:25:37.696 Oversized SGL: Not Supported 00:25:37.696 SGL Metadata Address: Not Supported 00:25:37.696 SGL Offset: Supported 00:25:37.696 Transport SGL Data Block: Not Supported 00:25:37.696 Replay Protected Memory Block: Not Supported 00:25:37.696 00:25:37.696 Firmware Slot Information 00:25:37.696 ========================= 00:25:37.696 Active slot: 0 00:25:37.696 00:25:37.696 00:25:37.696 Error Log 00:25:37.696 ========= 00:25:37.696 00:25:37.696 Active Namespaces 00:25:37.696 ================= 00:25:37.696 Discovery Log Page 00:25:37.696 ================== 00:25:37.696 Generation Counter: 2 00:25:37.696 Number of Records: 2 00:25:37.696 Record Format: 0 00:25:37.696 00:25:37.696 Discovery Log Entry 0 00:25:37.696 ---------------------- 00:25:37.696 Transport Type: 3 (TCP) 00:25:37.696 Address Family: 1 (IPv4) 00:25:37.696 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:37.696 Entry Flags: 00:25:37.696 Duplicate Returned Information: 1 00:25:37.696 Explicit Persistent Connection Support for Discovery: 1 00:25:37.696 Transport Requirements: 00:25:37.696 Secure Channel: Not Required 00:25:37.696 Port ID: 0 (0x0000) 00:25:37.696 Controller ID: 65535 (0xffff) 00:25:37.696 Admin Max SQ Size: 128 00:25:37.696 Transport Service Identifier: 4420 00:25:37.696 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:37.696 Transport Address: 10.0.0.2 00:25:37.696 
Discovery Log Entry 1 00:25:37.696 ---------------------- 00:25:37.696 Transport Type: 3 (TCP) 00:25:37.696 Address Family: 1 (IPv4) 00:25:37.696 Subsystem Type: 2 (NVM Subsystem) 00:25:37.696 Entry Flags: 00:25:37.696 Duplicate Returned Information: 0 00:25:37.696 Explicit Persistent Connection Support for Discovery: 0 00:25:37.696 Transport Requirements: 00:25:37.696 Secure Channel: Not Required 00:25:37.696 Port ID: 0 (0x0000) 00:25:37.696 Controller ID: 65535 (0xffff) 00:25:37.696 Admin Max SQ Size: 128 00:25:37.696 Transport Service Identifier: 4420 00:25:37.696 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:37.696 Transport Address: 10.0.0.2 [2024-07-15 09:34:24.771901] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:37.696 [2024-07-15 09:34:24.771911] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9ae40) on tqpair=0x1c17ec0 00:25:37.696 [2024-07-15 09:34:24.771918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.696 [2024-07-15 09:34:24.771923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9afc0) on tqpair=0x1c17ec0 00:25:37.696 [2024-07-15 09:34:24.771927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.696 [2024-07-15 09:34:24.771932] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b140) on tqpair=0x1c17ec0 00:25:37.696 [2024-07-15 09:34:24.771937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.696 [2024-07-15 09:34:24.771942] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.696 [2024-07-15 09:34:24.771946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.696 [2024-07-15 09:34:24.771958] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.696 [2024-07-15 09:34:24.771962] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.696 [2024-07-15 09:34:24.771965] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.696 [2024-07-15 09:34:24.771972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.696 [2024-07-15 09:34:24.771985] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.696 [2024-07-15 09:34:24.772043] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.696 [2024-07-15 09:34:24.772050] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.696 [2024-07-15 09:34:24.772053] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.696 [2024-07-15 09:34:24.772057] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.696 [2024-07-15 09:34:24.772064] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.696 [2024-07-15 09:34:24.772068] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.696 [2024-07-15 09:34:24.772072] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.696 [2024-07-15 
09:34:24.772078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.696 [2024-07-15 09:34:24.772091] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.696 [2024-07-15 09:34:24.772155] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.696 [2024-07-15 09:34:24.772161] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.696 [2024-07-15 09:34:24.772165] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.696 [2024-07-15 09:34:24.772169] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.696 [2024-07-15 09:34:24.772173] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:37.696 [2024-07-15 09:34:24.772178] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:37.696 [2024-07-15 09:34:24.772187] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.696 [2024-07-15 09:34:24.772191] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.696 [2024-07-15 09:34:24.772194] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.696 [2024-07-15 09:34:24.772201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.696 [2024-07-15 09:34:24.772210] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.696 [2024-07-15 09:34:24.772271] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.696 [2024-07-15 09:34:24.772277] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.696 [2024-07-15 09:34:24.772281] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.696 [2024-07-15 09:34:24.772284] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.696 [2024-07-15 09:34:24.772294] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.696 [2024-07-15 09:34:24.772298] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.696 [2024-07-15 09:34:24.772301] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.696 [2024-07-15 09:34:24.772308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.696 [2024-07-15 09:34:24.772318] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.696 [2024-07-15 09:34:24.772373] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.696 [2024-07-15 09:34:24.772379] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.696 [2024-07-15 09:34:24.772384] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.696 [2024-07-15 09:34:24.772388] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.696 [2024-07-15 09:34:24.772397] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.696 [2024-07-15 09:34:24.772401] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.696 [2024-07-15 09:34:24.772404] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.696 [2024-07-15 09:34:24.772411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.696 [2024-07-15 09:34:24.772421] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.696 [2024-07-15 09:34:24.772482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.696 [2024-07-15 09:34:24.772488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.697 [2024-07-15 09:34:24.772491] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.772495] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.697 [2024-07-15 09:34:24.772504] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.772508] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.772511] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.697 [2024-07-15 09:34:24.772518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.697 [2024-07-15 09:34:24.772528] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.697 [2024-07-15 09:34:24.772589] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.697 [2024-07-15 09:34:24.772595] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.697 [2024-07-15 09:34:24.772598] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.772602] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.697 [2024-07-15 09:34:24.772611] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.772615] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.772619] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.697 [2024-07-15 09:34:24.772625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.697 [2024-07-15 09:34:24.772635] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.697 [2024-07-15 09:34:24.772692] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.697 [2024-07-15 09:34:24.772699] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.697 [2024-07-15 09:34:24.772702] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.772706] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.697 [2024-07-15 09:34:24.772715] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.772719] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.772722] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.697 [2024-07-15 09:34:24.772729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.697 [2024-07-15 09:34:24.772739] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.697 [2024-07-15 09:34:24.772809] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.697 [2024-07-15 09:34:24.772815] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.697 [2024-07-15 09:34:24.772819] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.772825] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.697 [2024-07-15 09:34:24.772834] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.772838] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.772841] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.697 [2024-07-15 09:34:24.772848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.697 [2024-07-15 09:34:24.772858] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.697 [2024-07-15 09:34:24.772916] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.697 [2024-07-15 09:34:24.772922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.697 [2024-07-15 09:34:24.772925] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.772929] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.697 [2024-07-15 09:34:24.772938] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.772942] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.772946] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.697 [2024-07-15 09:34:24.772952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.697 [2024-07-15 09:34:24.772962] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.697 [2024-07-15 09:34:24.773027] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.697 [2024-07-15 09:34:24.773033] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.697 [2024-07-15 09:34:24.773036] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.773040] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.697 [2024-07-15 09:34:24.773049] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.773053] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.773056] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.697 [2024-07-15 09:34:24.773063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.697 [2024-07-15 09:34:24.773072] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.697 
[2024-07-15 09:34:24.773133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.697 [2024-07-15 09:34:24.773139] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.697 [2024-07-15 09:34:24.773143] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.773146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.697 [2024-07-15 09:34:24.773156] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.773160] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.773163] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.697 [2024-07-15 09:34:24.773170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.697 [2024-07-15 09:34:24.773179] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.697 [2024-07-15 09:34:24.773243] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.697 [2024-07-15 09:34:24.773249] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.697 [2024-07-15 09:34:24.773253] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.773256] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.697 [2024-07-15 09:34:24.773269] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.773273] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.773277] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.697 [2024-07-15 09:34:24.773283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.697 [2024-07-15 09:34:24.773293] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.697 [2024-07-15 09:34:24.773354] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.697 [2024-07-15 09:34:24.773360] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.697 [2024-07-15 09:34:24.773363] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.773367] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.697 [2024-07-15 09:34:24.773376] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.773380] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.773384] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.697 [2024-07-15 09:34:24.773390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.697 [2024-07-15 09:34:24.773400] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.697 [2024-07-15 09:34:24.773455] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.697 [2024-07-15 09:34:24.773461] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:25:37.697 [2024-07-15 09:34:24.773464] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.773468] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.697 [2024-07-15 09:34:24.773477] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.773481] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.697 [2024-07-15 09:34:24.773485] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.697 [2024-07-15 09:34:24.773491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.697 [2024-07-15 09:34:24.773501] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.697 [2024-07-15 09:34:24.773564] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.697 [2024-07-15 09:34:24.773570] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.697 [2024-07-15 09:34:24.773574] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.773577] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.698 [2024-07-15 09:34:24.773587] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.773591] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.773594] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.698 [2024-07-15 09:34:24.773601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.698 [2024-07-15 09:34:24.773610] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.698 [2024-07-15 09:34:24.773672] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.698 [2024-07-15 09:34:24.773678] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.698 [2024-07-15 09:34:24.773681] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.773685] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.698 [2024-07-15 09:34:24.773694] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.773700] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.773703] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.698 [2024-07-15 09:34:24.773710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.698 [2024-07-15 09:34:24.773719] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.698 [2024-07-15 09:34:24.773782] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.698 [2024-07-15 09:34:24.773789] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.698 [2024-07-15 09:34:24.773792] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.773796] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.698 [2024-07-15 09:34:24.773805] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.773809] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.773812] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.698 [2024-07-15 09:34:24.773819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.698 [2024-07-15 09:34:24.773829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.698 [2024-07-15 09:34:24.773890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.698 [2024-07-15 09:34:24.773897] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.698 [2024-07-15 09:34:24.773900] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.773904] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.698 [2024-07-15 09:34:24.773913] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.773917] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.773920] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.698 [2024-07-15 09:34:24.773927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.698 [2024-07-15 09:34:24.773936] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.698 [2024-07-15 09:34:24.774001] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.698 [2024-07-15 09:34:24.774007] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.698 [2024-07-15 09:34:24.774010] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774014] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.698 [2024-07-15 09:34:24.774023] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774027] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774030] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.698 [2024-07-15 09:34:24.774037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.698 [2024-07-15 09:34:24.774047] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.698 [2024-07-15 09:34:24.774102] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.698 [2024-07-15 09:34:24.774108] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.698 [2024-07-15 09:34:24.774111] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774115] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.698 [2024-07-15 09:34:24.774124] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774128] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774134] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.698 [2024-07-15 09:34:24.774141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.698 [2024-07-15 09:34:24.774150] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.698 [2024-07-15 09:34:24.774211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.698 [2024-07-15 09:34:24.774217] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.698 [2024-07-15 09:34:24.774220] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774224] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.698 [2024-07-15 09:34:24.774234] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774237] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774241] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.698 [2024-07-15 09:34:24.774247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.698 [2024-07-15 09:34:24.774257] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.698 [2024-07-15 09:34:24.774318] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.698 [2024-07-15 09:34:24.774324] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.698 [2024-07-15 09:34:24.774328] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774331] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.698 [2024-07-15 09:34:24.774341] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774344] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774348] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.698 [2024-07-15 09:34:24.774354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.698 [2024-07-15 09:34:24.774364] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.698 [2024-07-15 09:34:24.774425] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.698 [2024-07-15 09:34:24.774431] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.698 [2024-07-15 09:34:24.774434] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774438] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.698 [2024-07-15 09:34:24.774447] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774451] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774454] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.698 
[2024-07-15 09:34:24.774461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.698 [2024-07-15 09:34:24.774470] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.698 [2024-07-15 09:34:24.774523] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.698 [2024-07-15 09:34:24.774529] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.698 [2024-07-15 09:34:24.774532] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774536] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.698 [2024-07-15 09:34:24.774545] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774549] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774553] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.698 [2024-07-15 09:34:24.774561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.698 [2024-07-15 09:34:24.774570] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.698 [2024-07-15 09:34:24.774631] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.698 [2024-07-15 09:34:24.774637] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.698 [2024-07-15 09:34:24.774641] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774645] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.698 [2024-07-15 09:34:24.774654] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774658] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774661] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.698 [2024-07-15 09:34:24.774668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.698 [2024-07-15 09:34:24.774677] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.698 [2024-07-15 09:34:24.774742] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.698 [2024-07-15 09:34:24.774748] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.698 [2024-07-15 09:34:24.774758] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774762] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.698 [2024-07-15 09:34:24.774771] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774775] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774778] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.698 [2024-07-15 09:34:24.774785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.698 [2024-07-15 09:34:24.774795] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.698 [2024-07-15 09:34:24.774859] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.698 [2024-07-15 09:34:24.774865] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.698 [2024-07-15 09:34:24.774868] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.698 [2024-07-15 09:34:24.774881] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774885] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.698 [2024-07-15 09:34:24.774888] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.698 [2024-07-15 09:34:24.774895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.698 [2024-07-15 09:34:24.774904] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.699 [2024-07-15 09:34:24.774956] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.699 [2024-07-15 09:34:24.774962] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.699 [2024-07-15 09:34:24.774966] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.774970] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.699 [2024-07-15 09:34:24.774979] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.774982] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.774986] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.699 [2024-07-15 09:34:24.774992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.699 [2024-07-15 09:34:24.775004] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.699 [2024-07-15 09:34:24.775068] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.699 [2024-07-15 09:34:24.775074] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.699 [2024-07-15 09:34:24.775078] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.775082] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.699 [2024-07-15 09:34:24.775091] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.775095] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.775098] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.699 [2024-07-15 09:34:24.775105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.699 [2024-07-15 09:34:24.775114] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.699 [2024-07-15 09:34:24.775176] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.699 
[2024-07-15 09:34:24.775182] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.699 [2024-07-15 09:34:24.775185] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.775189] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.699 [2024-07-15 09:34:24.775198] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.775202] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.775205] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.699 [2024-07-15 09:34:24.775212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.699 [2024-07-15 09:34:24.775221] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.699 [2024-07-15 09:34:24.775279] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.699 [2024-07-15 09:34:24.775285] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.699 [2024-07-15 09:34:24.775289] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.775293] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.699 [2024-07-15 09:34:24.775302] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.775306] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.775309] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.699 [2024-07-15 09:34:24.775316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.699 [2024-07-15 09:34:24.775325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.699 [2024-07-15 09:34:24.775381] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.699 [2024-07-15 09:34:24.775387] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.699 [2024-07-15 09:34:24.775391] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.775394] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.699 [2024-07-15 09:34:24.775404] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.775408] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.775411] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.699 [2024-07-15 09:34:24.775418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.699 [2024-07-15 09:34:24.775427] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.699 [2024-07-15 09:34:24.775499] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.699 [2024-07-15 09:34:24.775505] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.699 [2024-07-15 09:34:24.775509] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:25:37.699 [2024-07-15 09:34:24.775513] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.699 [2024-07-15 09:34:24.775522] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.775526] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.775529] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.699 [2024-07-15 09:34:24.775536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.699 [2024-07-15 09:34:24.775545] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.699 [2024-07-15 09:34:24.775598] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.699 [2024-07-15 09:34:24.775604] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.699 [2024-07-15 09:34:24.775608] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.775611] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.699 [2024-07-15 09:34:24.775621] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.775625] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.775628] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.699 [2024-07-15 09:34:24.775635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.699 [2024-07-15 09:34:24.775644] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.699 [2024-07-15 09:34:24.775702] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.699 [2024-07-15 09:34:24.775708] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.699 [2024-07-15 09:34:24.775712] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.775715] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.699 [2024-07-15 09:34:24.775725] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.775729] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.775732] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.699 [2024-07-15 09:34:24.775739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.699 [2024-07-15 09:34:24.775748] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.699 [2024-07-15 09:34:24.779763] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.699 [2024-07-15 09:34:24.779770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.699 [2024-07-15 09:34:24.779773] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.779777] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.699 [2024-07-15 09:34:24.779787] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.779791] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.779794] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c17ec0) 00:25:37.699 [2024-07-15 09:34:24.779801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.699 [2024-07-15 09:34:24.779812] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9b2c0, cid 3, qid 0 00:25:37.699 [2024-07-15 09:34:24.779872] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.699 [2024-07-15 09:34:24.779881] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.699 [2024-07-15 09:34:24.779884] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.779888] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9b2c0) on tqpair=0x1c17ec0 00:25:37.699 [2024-07-15 09:34:24.779895] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:25:37.699 00:25:37.699 09:34:24 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:37.699 [2024-07-15 09:34:24.819375] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:25:37.699 [2024-07-15 09:34:24.819417] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid802568 ] 00:25:37.699 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.699 [2024-07-15 09:34:24.852310] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:37.699 [2024-07-15 09:34:24.852351] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:37.699 [2024-07-15 09:34:24.852356] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:37.699 [2024-07-15 09:34:24.852368] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:37.699 [2024-07-15 09:34:24.852373] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:37.699 [2024-07-15 09:34:24.855233] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:37.699 [2024-07-15 09:34:24.855257] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x19d1ec0 0 00:25:37.699 [2024-07-15 09:34:24.863760] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:37.699 [2024-07-15 09:34:24.863770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:37.699 [2024-07-15 09:34:24.863774] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:37.699 [2024-07-15 09:34:24.863777] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:37.699 [2024-07-15 09:34:24.863806] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.863812] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.863815] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19d1ec0) 00:25:37.699 [2024-07-15 09:34:24.863827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:37.699 [2024-07-15 09:34:24.863841] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a54e40, cid 0, qid 0 00:25:37.699 [2024-07-15 09:34:24.871761] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.699 [2024-07-15 09:34:24.871770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.699 [2024-07-15 09:34:24.871773] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.699 [2024-07-15 09:34:24.871778] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a54e40) on tqpair=0x19d1ec0 00:25:37.699 [2024-07-15 09:34:24.871788] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:37.700 [2024-07-15 09:34:24.871794] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:37.700 [2024-07-15 09:34:24.871799] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:37.700 [2024-07-15 09:34:24.871814] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.871818] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.871822] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19d1ec0) 00:25:37.700 [2024-07-15 09:34:24.871830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.700 [2024-07-15 09:34:24.871842] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a54e40, cid 0, qid 0 00:25:37.700 [2024-07-15 09:34:24.871904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.700 [2024-07-15 09:34:24.871911] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.700 [2024-07-15 09:34:24.871915] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.871918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a54e40) on tqpair=0x19d1ec0 00:25:37.700 [2024-07-15 09:34:24.871923] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:37.700 [2024-07-15 09:34:24.871930] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:37.700 [2024-07-15 09:34:24.871937] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.871940] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.871944] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19d1ec0) 00:25:37.700 [2024-07-15 09:34:24.871951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.700 [2024-07-15 09:34:24.871961] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a54e40, cid 0, qid 0 00:25:37.700 [2024-07-15 09:34:24.872007] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.700 [2024-07-15 09:34:24.872013] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.700 [2024-07-15 09:34:24.872017] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.872021] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a54e40) on tqpair=0x19d1ec0 00:25:37.700 [2024-07-15 09:34:24.872026] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:37.700 [2024-07-15 09:34:24.872033] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:37.700 [2024-07-15 09:34:24.872039] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.872043] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.872046] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19d1ec0) 00:25:37.700 [2024-07-15 09:34:24.872053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.700 [2024-07-15 09:34:24.872063] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a54e40, cid 0, qid 0 00:25:37.700 [2024-07-15 09:34:24.872112] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.700 [2024-07-15 09:34:24.872118] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.700 [2024-07-15 09:34:24.872122] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.872125] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a54e40) on tqpair=0x19d1ec0 00:25:37.700 [2024-07-15 09:34:24.872130] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:37.700 [2024-07-15 09:34:24.872139] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.872143] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.872146] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19d1ec0) 00:25:37.700 [2024-07-15 09:34:24.872155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.700 [2024-07-15 09:34:24.872166] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a54e40, cid 0, qid 0 00:25:37.700 [2024-07-15 09:34:24.872211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.700 [2024-07-15 09:34:24.872218] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.700 [2024-07-15 09:34:24.872221] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.872225] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a54e40) on tqpair=0x19d1ec0 00:25:37.700 [2024-07-15 09:34:24.872229] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:37.700 [2024-07-15 09:34:24.872234] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:37.700 [2024-07-15 
09:34:24.872241] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:37.700 [2024-07-15 09:34:24.872346] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:37.700 [2024-07-15 09:34:24.872350] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:37.700 [2024-07-15 09:34:24.872357] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.872361] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.872364] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19d1ec0) 00:25:37.700 [2024-07-15 09:34:24.872371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.700 [2024-07-15 09:34:24.872381] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a54e40, cid 0, qid 0 00:25:37.700 [2024-07-15 09:34:24.872433] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.700 [2024-07-15 09:34:24.872439] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.700 [2024-07-15 09:34:24.872442] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.872446] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a54e40) on tqpair=0x19d1ec0 00:25:37.700 [2024-07-15 09:34:24.872451] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:37.700 [2024-07-15 09:34:24.872460] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.872463] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.872467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19d1ec0) 00:25:37.700 [2024-07-15 09:34:24.872473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.700 [2024-07-15 09:34:24.872483] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a54e40, cid 0, qid 0 00:25:37.700 [2024-07-15 09:34:24.872532] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.700 [2024-07-15 09:34:24.872539] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.700 [2024-07-15 09:34:24.872542] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.872546] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a54e40) on tqpair=0x19d1ec0 00:25:37.700 [2024-07-15 09:34:24.872550] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:37.700 [2024-07-15 09:34:24.872555] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:37.700 [2024-07-15 09:34:24.872564] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:37.700 [2024-07-15 09:34:24.872571] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:37.700 [2024-07-15 09:34:24.872579] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.872583] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19d1ec0) 00:25:37.700 [2024-07-15 09:34:24.872589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.700 [2024-07-15 09:34:24.872599] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a54e40, cid 0, qid 0 00:25:37.700 [2024-07-15 09:34:24.872687] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:37.700 [2024-07-15 09:34:24.872694] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:37.700 [2024-07-15 09:34:24.872697] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.872701] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19d1ec0): datao=0, datal=4096, cccid=0 00:25:37.700 [2024-07-15 09:34:24.872706] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a54e40) on tqpair(0x19d1ec0): expected_datao=0, payload_size=4096 00:25:37.700 [2024-07-15 09:34:24.872710] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.872717] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.872721] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.872797] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.700 [2024-07-15 09:34:24.872804] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.700 [2024-07-15 09:34:24.872807] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.872811] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a54e40) on tqpair=0x19d1ec0 00:25:37.700 [2024-07-15 09:34:24.872818] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:37.700 [2024-07-15 09:34:24.872825] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:37.700 [2024-07-15 09:34:24.872829] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:37.700 [2024-07-15 09:34:24.872833] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:37.700 [2024-07-15 09:34:24.872837] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:37.700 [2024-07-15 09:34:24.872842] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:37.700 [2024-07-15 09:34:24.872849] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:37.700 [2024-07-15 09:34:24.872856] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.872860] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.872863] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19d1ec0) 
00:25:37.700 [2024-07-15 09:34:24.872870] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:37.700 [2024-07-15 09:34:24.872881] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a54e40, cid 0, qid 0 00:25:37.700 [2024-07-15 09:34:24.872937] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.700 [2024-07-15 09:34:24.872944] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.700 [2024-07-15 09:34:24.872947] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.872951] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a54e40) on tqpair=0x19d1ec0 00:25:37.700 [2024-07-15 09:34:24.872959] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.872963] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.700 [2024-07-15 09:34:24.872967] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19d1ec0) 00:25:37.700 [2024-07-15 09:34:24.872973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.700 [2024-07-15 09:34:24.872979] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.872982] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.872986] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x19d1ec0) 00:25:37.701 [2024-07-15 09:34:24.872991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.701 [2024-07-15 09:34:24.872997] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.873001] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.873004] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x19d1ec0) 00:25:37.701 [2024-07-15 09:34:24.873010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.701 [2024-07-15 09:34:24.873016] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.873019] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.873023] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d1ec0) 00:25:37.701 [2024-07-15 09:34:24.873028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.701 [2024-07-15 09:34:24.873033] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:37.701 [2024-07-15 09:34:24.873042] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:37.701 [2024-07-15 09:34:24.873049] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.873052] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19d1ec0) 00:25:37.701 [2024-07-15 09:34:24.873059] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.701 [2024-07-15 09:34:24.873070] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a54e40, cid 0, qid 0 00:25:37.701 [2024-07-15 09:34:24.873076] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a54fc0, cid 1, qid 0 00:25:37.701 [2024-07-15 09:34:24.873080] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a55140, cid 2, qid 0 00:25:37.701 [2024-07-15 09:34:24.873085] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a552c0, cid 3, qid 0 00:25:37.701 [2024-07-15 09:34:24.873089] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a55440, cid 4, qid 0 00:25:37.701 [2024-07-15 09:34:24.873194] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.701 [2024-07-15 09:34:24.873200] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.701 [2024-07-15 09:34:24.873204] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.873207] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a55440) on tqpair=0x19d1ec0 00:25:37.701 [2024-07-15 09:34:24.873212] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:37.701 [2024-07-15 09:34:24.873217] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:37.701 [2024-07-15 09:34:24.873225] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:37.701 [2024-07-15 09:34:24.873233] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:37.701 [2024-07-15 09:34:24.873239] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.873243] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.873247] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19d1ec0) 00:25:37.701 [2024-07-15 09:34:24.873253] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:37.701 [2024-07-15 09:34:24.873263] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a55440, cid 4, qid 0 00:25:37.701 [2024-07-15 09:34:24.873315] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.701 [2024-07-15 09:34:24.873322] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.701 [2024-07-15 09:34:24.873325] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.873329] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a55440) on tqpair=0x19d1ec0 00:25:37.701 [2024-07-15 09:34:24.873390] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:37.701 [2024-07-15 09:34:24.873399] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:37.701 [2024-07-15 09:34:24.873406] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.873409] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19d1ec0) 00:25:37.701 [2024-07-15 09:34:24.873416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.701 [2024-07-15 09:34:24.873425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a55440, cid 4, qid 0 00:25:37.701 [2024-07-15 09:34:24.873481] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:37.701 [2024-07-15 09:34:24.873487] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:37.701 [2024-07-15 09:34:24.873491] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.873494] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19d1ec0): datao=0, datal=4096, cccid=4 00:25:37.701 [2024-07-15 09:34:24.873499] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a55440) on tqpair(0x19d1ec0): expected_datao=0, payload_size=4096 00:25:37.701 [2024-07-15 09:34:24.873503] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.873509] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.873513] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.873527] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.701 [2024-07-15 09:34:24.873533] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.701 [2024-07-15 09:34:24.873536] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.873540] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a55440) on tqpair=0x19d1ec0 00:25:37.701 [2024-07-15 09:34:24.873548] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:37.701 [2024-07-15 09:34:24.873561] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:37.701 [2024-07-15 09:34:24.873570] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:37.701 [2024-07-15 09:34:24.873577] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.873580] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19d1ec0) 00:25:37.701 [2024-07-15 09:34:24.873588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.701 [2024-07-15 09:34:24.873599] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a55440, cid 4, qid 0 00:25:37.701 [2024-07-15 09:34:24.873666] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:37.701 [2024-07-15 09:34:24.873673] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:37.701 [2024-07-15 09:34:24.873676] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.873680] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19d1ec0): datao=0, datal=4096, cccid=4 00:25:37.701 [2024-07-15 09:34:24.873684] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a55440) on tqpair(0x19d1ec0): expected_datao=0, payload_size=4096 00:25:37.701 [2024-07-15 09:34:24.873688] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.873695] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.873698] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.873781] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.701 [2024-07-15 09:34:24.873788] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.701 [2024-07-15 09:34:24.873791] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.873795] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a55440) on tqpair=0x19d1ec0 00:25:37.701 [2024-07-15 09:34:24.873807] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:37.701 [2024-07-15 09:34:24.873816] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:37.701 [2024-07-15 09:34:24.873823] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.873827] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19d1ec0) 00:25:37.701 [2024-07-15 09:34:24.873833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.701 [2024-07-15 09:34:24.873844] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a55440, cid 4, qid 0 00:25:37.701 [2024-07-15 09:34:24.873907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:37.701 [2024-07-15 09:34:24.873913] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:37.701 [2024-07-15 09:34:24.873917] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.873920] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19d1ec0): datao=0, datal=4096, cccid=4 00:25:37.701 [2024-07-15 09:34:24.873924] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a55440) on tqpair(0x19d1ec0): expected_datao=0, payload_size=4096 00:25:37.701 [2024-07-15 09:34:24.873928] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.873935] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.873938] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.874002] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.701 [2024-07-15 09:34:24.874008] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.701 [2024-07-15 09:34:24.874012] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.874015] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a55440) on tqpair=0x19d1ec0 00:25:37.701 [2024-07-15 09:34:24.874022] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:37.701 [2024-07-15 09:34:24.874030] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:37.701 [2024-07-15 09:34:24.874041] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:37.701 [2024-07-15 09:34:24.874048] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:25:37.701 [2024-07-15 09:34:24.874053] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:37.701 [2024-07-15 09:34:24.874058] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:37.701 [2024-07-15 09:34:24.874063] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:37.701 [2024-07-15 09:34:24.874067] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:37.701 [2024-07-15 09:34:24.874072] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:37.701 [2024-07-15 09:34:24.874085] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.701 [2024-07-15 09:34:24.874089] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19d1ec0) 00:25:37.701 [2024-07-15 09:34:24.874095] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.702 [2024-07-15 09:34:24.874101] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874105] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19d1ec0) 00:25:37.702 [2024-07-15 09:34:24.874114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.702 [2024-07-15 09:34:24.874127] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a55440, cid 4, qid 0 00:25:37.702 [2024-07-15 09:34:24.874132] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a555c0, cid 5, qid 0 00:25:37.702 [2024-07-15 09:34:24.874198] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.702 [2024-07-15 09:34:24.874204] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.702 [2024-07-15 09:34:24.874207] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874211] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a55440) on tqpair=0x19d1ec0 00:25:37.702 [2024-07-15 09:34:24.874218] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.702 [2024-07-15 09:34:24.874223] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.702 [2024-07-15 09:34:24.874227] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874230] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a555c0) on tqpair=0x19d1ec0 00:25:37.702 [2024-07-15 09:34:24.874239] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874243] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19d1ec0) 00:25:37.702 [2024-07-15 09:34:24.874249] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.702 [2024-07-15 09:34:24.874259] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a555c0, cid 5, qid 0 00:25:37.702 [2024-07-15 09:34:24.874343] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.702 [2024-07-15 09:34:24.874349] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.702 [2024-07-15 09:34:24.874352] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874356] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a555c0) on tqpair=0x19d1ec0 00:25:37.702 [2024-07-15 09:34:24.874365] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874369] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19d1ec0) 00:25:37.702 [2024-07-15 09:34:24.874377] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.702 [2024-07-15 09:34:24.874386] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a555c0, cid 5, qid 0 00:25:37.702 [2024-07-15 09:34:24.874437] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.702 [2024-07-15 09:34:24.874443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.702 [2024-07-15 09:34:24.874446] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874450] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a555c0) on tqpair=0x19d1ec0 00:25:37.702 [2024-07-15 09:34:24.874459] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874462] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19d1ec0) 00:25:37.702 [2024-07-15 09:34:24.874468] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.702 [2024-07-15 09:34:24.874478] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a555c0, cid 5, qid 0 00:25:37.702 [2024-07-15 09:34:24.874528] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.702 [2024-07-15 09:34:24.874534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.702 [2024-07-15 09:34:24.874537] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874541] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a555c0) on tqpair=0x19d1ec0 00:25:37.702 [2024-07-15 09:34:24.874554] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874558] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19d1ec0) 00:25:37.702 [2024-07-15 09:34:24.874565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.702 [2024-07-15 09:34:24.874572] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874575] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19d1ec0) 00:25:37.702 [2024-07-15 09:34:24.874581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.702 [2024-07-15 09:34:24.874589] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874592] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x19d1ec0) 00:25:37.702 [2024-07-15 09:34:24.874598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.702 [2024-07-15 09:34:24.874605] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874609] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19d1ec0) 00:25:37.702 [2024-07-15 09:34:24.874615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.702 [2024-07-15 09:34:24.874625] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a555c0, cid 5, qid 0 00:25:37.702 [2024-07-15 09:34:24.874630] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a55440, cid 4, qid 0 00:25:37.702 [2024-07-15 09:34:24.874635] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a55740, cid 6, qid 0 00:25:37.702 [2024-07-15 09:34:24.874640] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a558c0, cid 7, qid 0 00:25:37.702 [2024-07-15 09:34:24.874743] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:37.702 [2024-07-15 09:34:24.874750] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:37.702 [2024-07-15 09:34:24.874761] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874765] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19d1ec0): datao=0, datal=8192, cccid=5 00:25:37.702 [2024-07-15 09:34:24.874769] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a555c0) on tqpair(0x19d1ec0): expected_datao=0, payload_size=8192 00:25:37.702 [2024-07-15 09:34:24.874774] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874853] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874858] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874863] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:37.702 [2024-07-15 09:34:24.874869] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:37.702 [2024-07-15 09:34:24.874872] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874876] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19d1ec0): datao=0, datal=512, cccid=4 00:25:37.702 [2024-07-15 09:34:24.874880] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a55440) on tqpair(0x19d1ec0): expected_datao=0, payload_size=512 00:25:37.702 [2024-07-15 09:34:24.874884] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874891] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874894] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874900] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:37.702 [2024-07-15 09:34:24.874905] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:37.702 [2024-07-15 09:34:24.874908] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874912] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19d1ec0): datao=0, datal=512, cccid=6 00:25:37.702 [2024-07-15 09:34:24.874916] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a55740) on tqpair(0x19d1ec0): expected_datao=0, payload_size=512 00:25:37.702 [2024-07-15 09:34:24.874920] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874926] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874930] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874935] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:37.702 [2024-07-15 09:34:24.874941] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:37.702 [2024-07-15 09:34:24.874944] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874948] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19d1ec0): datao=0, datal=4096, cccid=7 00:25:37.702 [2024-07-15 09:34:24.874952] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a558c0) on tqpair(0x19d1ec0): expected_datao=0, payload_size=4096 00:25:37.702 [2024-07-15 09:34:24.874956] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874962] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874966] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874980] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.702 [2024-07-15 09:34:24.874986] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.702 [2024-07-15 09:34:24.874989] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.874993] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a555c0) on tqpair=0x19d1ec0 00:25:37.702 [2024-07-15 09:34:24.875004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.702 [2024-07-15 09:34:24.875010] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.702 [2024-07-15 09:34:24.875013] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.702 [2024-07-15 09:34:24.875017] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a55440) on tqpair=0x19d1ec0 00:25:37.702 [2024-07-15 09:34:24.875027] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.703 [2024-07-15 09:34:24.875034] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.703 [2024-07-15 09:34:24.875037] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.703 [2024-07-15 09:34:24.875041] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a55740) on 
tqpair=0x19d1ec0 00:25:37.703 [2024-07-15 09:34:24.875048] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.703 [2024-07-15 09:34:24.875054] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.703 [2024-07-15 09:34:24.875057] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.703 [2024-07-15 09:34:24.875061] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a558c0) on tqpair=0x19d1ec0 00:25:37.703 ===================================================== 00:25:37.703 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:37.703 ===================================================== 00:25:37.703 Controller Capabilities/Features 00:25:37.703 ================================ 00:25:37.703 Vendor ID: 8086 00:25:37.703 Subsystem Vendor ID: 8086 00:25:37.703 Serial Number: SPDK00000000000001 00:25:37.703 Model Number: SPDK bdev Controller 00:25:37.703 Firmware Version: 24.09 00:25:37.703 Recommended Arb Burst: 6 00:25:37.703 IEEE OUI Identifier: e4 d2 5c 00:25:37.703 Multi-path I/O 00:25:37.703 May have multiple subsystem ports: Yes 00:25:37.703 May have multiple controllers: Yes 00:25:37.703 Associated with SR-IOV VF: No 00:25:37.703 Max Data Transfer Size: 131072 00:25:37.703 Max Number of Namespaces: 32 00:25:37.703 Max Number of I/O Queues: 127 00:25:37.703 NVMe Specification Version (VS): 1.3 00:25:37.703 NVMe Specification Version (Identify): 1.3 00:25:37.703 Maximum Queue Entries: 128 00:25:37.703 Contiguous Queues Required: Yes 00:25:37.703 Arbitration Mechanisms Supported 00:25:37.703 Weighted Round Robin: Not Supported 00:25:37.703 Vendor Specific: Not Supported 00:25:37.703 Reset Timeout: 15000 ms 00:25:37.703 Doorbell Stride: 4 bytes 00:25:37.703 NVM Subsystem Reset: Not Supported 00:25:37.703 Command Sets Supported 00:25:37.703 NVM Command Set: Supported 00:25:37.703 Boot Partition: Not Supported 00:25:37.703 Memory Page Size Minimum: 4096 bytes 00:25:37.703 Memory Page Size Maximum: 4096 bytes 00:25:37.703 Persistent Memory Region: Not Supported 00:25:37.703 Optional Asynchronous Events Supported 00:25:37.703 Namespace Attribute Notices: Supported 00:25:37.703 Firmware Activation Notices: Not Supported 00:25:37.703 ANA Change Notices: Not Supported 00:25:37.703 PLE Aggregate Log Change Notices: Not Supported 00:25:37.703 LBA Status Info Alert Notices: Not Supported 00:25:37.703 EGE Aggregate Log Change Notices: Not Supported 00:25:37.703 Normal NVM Subsystem Shutdown event: Not Supported 00:25:37.703 Zone Descriptor Change Notices: Not Supported 00:25:37.703 Discovery Log Change Notices: Not Supported 00:25:37.703 Controller Attributes 00:25:37.703 128-bit Host Identifier: Supported 00:25:37.703 Non-Operational Permissive Mode: Not Supported 00:25:37.703 NVM Sets: Not Supported 00:25:37.703 Read Recovery Levels: Not Supported 00:25:37.703 Endurance Groups: Not Supported 00:25:37.703 Predictable Latency Mode: Not Supported 00:25:37.703 Traffic Based Keep ALive: Not Supported 00:25:37.703 Namespace Granularity: Not Supported 00:25:37.703 SQ Associations: Not Supported 00:25:37.703 UUID List: Not Supported 00:25:37.703 Multi-Domain Subsystem: Not Supported 00:25:37.703 Fixed Capacity Management: Not Supported 00:25:37.703 Variable Capacity Management: Not Supported 00:25:37.703 Delete Endurance Group: Not Supported 00:25:37.703 Delete NVM Set: Not Supported 00:25:37.703 Extended LBA Formats Supported: Not Supported 00:25:37.703 Flexible Data Placement Supported: Not Supported 
00:25:37.703 00:25:37.703 Controller Memory Buffer Support 00:25:37.703 ================================ 00:25:37.703 Supported: No 00:25:37.703 00:25:37.703 Persistent Memory Region Support 00:25:37.703 ================================ 00:25:37.703 Supported: No 00:25:37.703 00:25:37.703 Admin Command Set Attributes 00:25:37.703 ============================ 00:25:37.703 Security Send/Receive: Not Supported 00:25:37.703 Format NVM: Not Supported 00:25:37.703 Firmware Activate/Download: Not Supported 00:25:37.703 Namespace Management: Not Supported 00:25:37.703 Device Self-Test: Not Supported 00:25:37.703 Directives: Not Supported 00:25:37.703 NVMe-MI: Not Supported 00:25:37.703 Virtualization Management: Not Supported 00:25:37.703 Doorbell Buffer Config: Not Supported 00:25:37.703 Get LBA Status Capability: Not Supported 00:25:37.703 Command & Feature Lockdown Capability: Not Supported 00:25:37.703 Abort Command Limit: 4 00:25:37.703 Async Event Request Limit: 4 00:25:37.703 Number of Firmware Slots: N/A 00:25:37.703 Firmware Slot 1 Read-Only: N/A 00:25:37.703 Firmware Activation Without Reset: N/A 00:25:37.703 Multiple Update Detection Support: N/A 00:25:37.703 Firmware Update Granularity: No Information Provided 00:25:37.703 Per-Namespace SMART Log: No 00:25:37.703 Asymmetric Namespace Access Log Page: Not Supported 00:25:37.703 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:37.703 Command Effects Log Page: Supported 00:25:37.703 Get Log Page Extended Data: Supported 00:25:37.703 Telemetry Log Pages: Not Supported 00:25:37.703 Persistent Event Log Pages: Not Supported 00:25:37.703 Supported Log Pages Log Page: May Support 00:25:37.703 Commands Supported & Effects Log Page: Not Supported 00:25:37.703 Feature Identifiers & Effects Log Page:May Support 00:25:37.703 NVMe-MI Commands & Effects Log Page: May Support 00:25:37.703 Data Area 4 for Telemetry Log: Not Supported 00:25:37.703 Error Log Page Entries Supported: 128 00:25:37.703 Keep Alive: Supported 00:25:37.703 Keep Alive Granularity: 10000 ms 00:25:37.703 00:25:37.703 NVM Command Set Attributes 00:25:37.703 ========================== 00:25:37.703 Submission Queue Entry Size 00:25:37.703 Max: 64 00:25:37.703 Min: 64 00:25:37.703 Completion Queue Entry Size 00:25:37.703 Max: 16 00:25:37.703 Min: 16 00:25:37.703 Number of Namespaces: 32 00:25:37.703 Compare Command: Supported 00:25:37.703 Write Uncorrectable Command: Not Supported 00:25:37.703 Dataset Management Command: Supported 00:25:37.703 Write Zeroes Command: Supported 00:25:37.703 Set Features Save Field: Not Supported 00:25:37.703 Reservations: Supported 00:25:37.703 Timestamp: Not Supported 00:25:37.703 Copy: Supported 00:25:37.703 Volatile Write Cache: Present 00:25:37.703 Atomic Write Unit (Normal): 1 00:25:37.703 Atomic Write Unit (PFail): 1 00:25:37.703 Atomic Compare & Write Unit: 1 00:25:37.703 Fused Compare & Write: Supported 00:25:37.703 Scatter-Gather List 00:25:37.703 SGL Command Set: Supported 00:25:37.703 SGL Keyed: Supported 00:25:37.703 SGL Bit Bucket Descriptor: Not Supported 00:25:37.703 SGL Metadata Pointer: Not Supported 00:25:37.703 Oversized SGL: Not Supported 00:25:37.703 SGL Metadata Address: Not Supported 00:25:37.703 SGL Offset: Supported 00:25:37.703 Transport SGL Data Block: Not Supported 00:25:37.703 Replay Protected Memory Block: Not Supported 00:25:37.703 00:25:37.703 Firmware Slot Information 00:25:37.703 ========================= 00:25:37.703 Active slot: 1 00:25:37.703 Slot 1 Firmware Revision: 24.09 00:25:37.703 00:25:37.703 00:25:37.703 
Commands Supported and Effects 00:25:37.703 ============================== 00:25:37.703 Admin Commands 00:25:37.703 -------------- 00:25:37.703 Get Log Page (02h): Supported 00:25:37.703 Identify (06h): Supported 00:25:37.703 Abort (08h): Supported 00:25:37.703 Set Features (09h): Supported 00:25:37.703 Get Features (0Ah): Supported 00:25:37.703 Asynchronous Event Request (0Ch): Supported 00:25:37.703 Keep Alive (18h): Supported 00:25:37.703 I/O Commands 00:25:37.703 ------------ 00:25:37.703 Flush (00h): Supported LBA-Change 00:25:37.703 Write (01h): Supported LBA-Change 00:25:37.703 Read (02h): Supported 00:25:37.703 Compare (05h): Supported 00:25:37.703 Write Zeroes (08h): Supported LBA-Change 00:25:37.703 Dataset Management (09h): Supported LBA-Change 00:25:37.703 Copy (19h): Supported LBA-Change 00:25:37.703 00:25:37.703 Error Log 00:25:37.703 ========= 00:25:37.703 00:25:37.703 Arbitration 00:25:37.703 =========== 00:25:37.703 Arbitration Burst: 1 00:25:37.703 00:25:37.703 Power Management 00:25:37.703 ================ 00:25:37.703 Number of Power States: 1 00:25:37.703 Current Power State: Power State #0 00:25:37.703 Power State #0: 00:25:37.703 Max Power: 0.00 W 00:25:37.703 Non-Operational State: Operational 00:25:37.703 Entry Latency: Not Reported 00:25:37.703 Exit Latency: Not Reported 00:25:37.703 Relative Read Throughput: 0 00:25:37.703 Relative Read Latency: 0 00:25:37.703 Relative Write Throughput: 0 00:25:37.703 Relative Write Latency: 0 00:25:37.703 Idle Power: Not Reported 00:25:37.703 Active Power: Not Reported 00:25:37.703 Non-Operational Permissive Mode: Not Supported 00:25:37.703 00:25:37.703 Health Information 00:25:37.703 ================== 00:25:37.703 Critical Warnings: 00:25:37.703 Available Spare Space: OK 00:25:37.703 Temperature: OK 00:25:37.703 Device Reliability: OK 00:25:37.703 Read Only: No 00:25:37.703 Volatile Memory Backup: OK 00:25:37.703 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:37.703 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:25:37.703 Available Spare: 0% 00:25:37.703 Available Spare Threshold: 0% 00:25:37.703 Life Percentage Used:[2024-07-15 09:34:24.875158] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.704 [2024-07-15 09:34:24.875164] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19d1ec0) 00:25:37.704 [2024-07-15 09:34:24.875170] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.704 [2024-07-15 09:34:24.875181] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a558c0, cid 7, qid 0 00:25:37.704 [2024-07-15 09:34:24.875244] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.704 [2024-07-15 09:34:24.875250] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.704 [2024-07-15 09:34:24.875254] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.704 [2024-07-15 09:34:24.875257] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a558c0) on tqpair=0x19d1ec0 00:25:37.704 [2024-07-15 09:34:24.875288] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:37.704 [2024-07-15 09:34:24.875298] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a54e40) on tqpair=0x19d1ec0 00:25:37.704 [2024-07-15 09:34:24.875304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.704 [2024-07-15 09:34:24.875309] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a54fc0) on tqpair=0x19d1ec0 00:25:37.704 [2024-07-15 09:34:24.875314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.704 [2024-07-15 09:34:24.875319] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a55140) on tqpair=0x19d1ec0 00:25:37.704 [2024-07-15 09:34:24.875323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.704 [2024-07-15 09:34:24.875328] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a552c0) on tqpair=0x19d1ec0 00:25:37.704 [2024-07-15 09:34:24.875333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.704 [2024-07-15 09:34:24.875340] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.704 [2024-07-15 09:34:24.875344] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.704 [2024-07-15 09:34:24.875347] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d1ec0) 00:25:37.704 [2024-07-15 09:34:24.875354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.704 [2024-07-15 09:34:24.875365] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a552c0, cid 3, qid 0 00:25:37.704 [2024-07-15 09:34:24.875416] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.704 [2024-07-15 09:34:24.875422] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.704 [2024-07-15 09:34:24.875425] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.704 [2024-07-15 09:34:24.875429] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a552c0) on tqpair=0x19d1ec0 00:25:37.704 [2024-07-15 09:34:24.875436] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.704 [2024-07-15 09:34:24.875440] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.704 [2024-07-15 09:34:24.875445] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d1ec0) 00:25:37.704 [2024-07-15 09:34:24.875452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.704 [2024-07-15 09:34:24.875465] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a552c0, cid 3, qid 0 00:25:37.704 [2024-07-15 09:34:24.875520] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.704 [2024-07-15 09:34:24.875526] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.704 [2024-07-15 09:34:24.875529] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.704 [2024-07-15 09:34:24.875533] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a552c0) on tqpair=0x19d1ec0 00:25:37.704 [2024-07-15 09:34:24.875538] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:37.704 [2024-07-15 09:34:24.875542] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:37.704 [2024-07-15 09:34:24.875551] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.704 [2024-07-15 09:34:24.875555] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.704 [2024-07-15 09:34:24.875558] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d1ec0) 00:25:37.704 [2024-07-15 09:34:24.875565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.704 [2024-07-15 09:34:24.875574] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a552c0, cid 3, qid 0 00:25:37.704 [2024-07-15 09:34:24.875629] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.704 [2024-07-15 09:34:24.875635] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.704 [2024-07-15 09:34:24.875638] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.704 [2024-07-15 09:34:24.875642] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a552c0) on tqpair=0x19d1ec0 00:25:37.704 [2024-07-15 09:34:24.875652] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.704 [2024-07-15 09:34:24.875656] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.704 [2024-07-15 09:34:24.875659] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d1ec0) 00:25:37.704 [2024-07-15 09:34:24.875666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.704 [2024-07-15 09:34:24.875675] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a552c0, cid 3, qid 0 00:25:37.704 [2024-07-15 09:34:24.875724] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.704 [2024-07-15 09:34:24.875731] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.704 [2024-07-15 09:34:24.875734] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.704 [2024-07-15 09:34:24.875738] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a552c0) on tqpair=0x19d1ec0 00:25:37.704 [2024-07-15 09:34:24.875747] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:37.704 [2024-07-15 09:34:24.879758] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:37.704 [2024-07-15 09:34:24.879763] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d1ec0) 00:25:37.704 [2024-07-15 09:34:24.879770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.704 [2024-07-15 09:34:24.879781] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a552c0, cid 3, qid 0 00:25:37.704 [2024-07-15 09:34:24.879833] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:37.704 [2024-07-15 09:34:24.879839] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:37.704 [2024-07-15 09:34:24.879842] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:37.704 [2024-07-15 09:34:24.879846] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a552c0) on tqpair=0x19d1ec0 00:25:37.704 [2024-07-15 09:34:24.879855] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:25:37.965 0% 00:25:37.965 Data Units Read: 0 00:25:37.965 Data Units Written: 0 
00:25:37.965 Host Read Commands: 0 00:25:37.965 Host Write Commands: 0 00:25:37.965 Controller Busy Time: 0 minutes 00:25:37.965 Power Cycles: 0 00:25:37.965 Power On Hours: 0 hours 00:25:37.965 Unsafe Shutdowns: 0 00:25:37.965 Unrecoverable Media Errors: 0 00:25:37.965 Lifetime Error Log Entries: 0 00:25:37.965 Warning Temperature Time: 0 minutes 00:25:37.965 Critical Temperature Time: 0 minutes 00:25:37.965 00:25:37.965 Number of Queues 00:25:37.965 ================ 00:25:37.965 Number of I/O Submission Queues: 127 00:25:37.965 Number of I/O Completion Queues: 127 00:25:37.965 00:25:37.965 Active Namespaces 00:25:37.965 ================= 00:25:37.965 Namespace ID:1 00:25:37.965 Error Recovery Timeout: Unlimited 00:25:37.965 Command Set Identifier: NVM (00h) 00:25:37.965 Deallocate: Supported 00:25:37.965 Deallocated/Unwritten Error: Not Supported 00:25:37.965 Deallocated Read Value: Unknown 00:25:37.965 Deallocate in Write Zeroes: Not Supported 00:25:37.965 Deallocated Guard Field: 0xFFFF 00:25:37.965 Flush: Supported 00:25:37.965 Reservation: Supported 00:25:37.965 Namespace Sharing Capabilities: Multiple Controllers 00:25:37.965 Size (in LBAs): 131072 (0GiB) 00:25:37.965 Capacity (in LBAs): 131072 (0GiB) 00:25:37.965 Utilization (in LBAs): 131072 (0GiB) 00:25:37.965 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:37.965 EUI64: ABCDEF0123456789 00:25:37.965 UUID: fc6edf9a-360f-4dfd-8949-1f13e43fb7a5 00:25:37.965 Thin Provisioning: Not Supported 00:25:37.965 Per-NS Atomic Units: Yes 00:25:37.965 Atomic Boundary Size (Normal): 0 00:25:37.965 Atomic Boundary Size (PFail): 0 00:25:37.965 Atomic Boundary Offset: 0 00:25:37.965 Maximum Single Source Range Length: 65535 00:25:37.965 Maximum Copy Length: 65535 00:25:37.965 Maximum Source Range Count: 1 00:25:37.965 NGUID/EUI64 Never Reused: No 00:25:37.965 Namespace Write Protected: No 00:25:37.965 Number of LBA Formats: 1 00:25:37.965 Current LBA Format: LBA Format #00 00:25:37.965 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:37.965 00:25:37.965 09:34:24 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:37.965 09:34:24 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:37.965 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.965 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:37.965 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.965 09:34:24 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:37.965 09:34:24 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:37.965 09:34:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:37.965 09:34:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:25:37.965 09:34:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:37.965 09:34:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:25:37.965 09:34:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:37.965 09:34:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:37.965 rmmod nvme_tcp 00:25:37.965 rmmod nvme_fabrics 00:25:37.965 rmmod nvme_keyring 00:25:37.965 09:34:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:37.965 09:34:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:25:37.965 09:34:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- 
# return 0 00:25:37.965 09:34:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 802341 ']' 00:25:37.965 09:34:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 802341 00:25:37.965 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 802341 ']' 00:25:37.965 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 802341 00:25:37.965 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:25:37.965 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:37.965 09:34:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 802341 00:25:37.965 09:34:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:37.965 09:34:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:37.965 09:34:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 802341' 00:25:37.965 killing process with pid 802341 00:25:37.965 09:34:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 802341 00:25:37.965 09:34:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 802341 00:25:38.227 09:34:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:38.227 09:34:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:38.227 09:34:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:38.227 09:34:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:38.227 09:34:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:38.227 09:34:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.227 09:34:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:38.227 09:34:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.166 09:34:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:40.166 00:25:40.166 real 0m11.814s 00:25:40.166 user 0m7.703s 00:25:40.166 sys 0m6.364s 00:25:40.166 09:34:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:40.166 09:34:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:40.166 ************************************ 00:25:40.166 END TEST nvmf_identify 00:25:40.166 ************************************ 00:25:40.166 09:34:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:40.166 09:34:27 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:40.166 09:34:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:40.166 09:34:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:40.166 09:34:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:40.166 ************************************ 00:25:40.166 START TEST nvmf_perf 00:25:40.166 ************************************ 00:25:40.166 09:34:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:40.435 * Looking for test storage... 
00:25:40.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.435 09:34:27 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:25:40.435 09:34:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:48.576 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:48.576 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:48.576 Found net devices under 0000:31:00.0: cvl_0_0 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:48.576 Found net devices under 0000:31:00.1: cvl_0_1 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:48.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:48.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms 00:25:48.576 00:25:48.576 --- 10.0.0.2 ping statistics --- 00:25:48.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.576 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:48.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:48.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:25:48.576 00:25:48.576 --- 10.0.0.1 ping statistics --- 00:25:48.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.576 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:48.576 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:48.577 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:48.577 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:48.577 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:48.577 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:48.577 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:48.577 09:34:35 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:48.577 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:48.577 09:34:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:48.577 09:34:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:48.577 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:48.577 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=807236 00:25:48.577 09:34:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 807236 00:25:48.577 09:34:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 807236 ']' 00:25:48.577 09:34:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.577 09:34:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:48.577 09:34:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.577 09:34:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:48.577 09:34:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:48.577 [2024-07-15 09:34:35.632254] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:25:48.577 [2024-07-15 09:34:35.632303] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.577 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.577 [2024-07-15 09:34:35.697520] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:48.577 [2024-07-15 09:34:35.763658] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:48.577 [2024-07-15 09:34:35.763692] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:48.577 [2024-07-15 09:34:35.763699] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:48.577 [2024-07-15 09:34:35.763705] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:48.577 [2024-07-15 09:34:35.763714] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:48.577 [2024-07-15 09:34:35.763794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.577 [2024-07-15 09:34:35.763872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:48.577 [2024-07-15 09:34:35.764008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.577 [2024-07-15 09:34:35.764009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:49.519 09:34:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:49.519 09:34:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:25:49.519 09:34:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:49.519 09:34:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:49.519 09:34:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:49.519 09:34:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:49.519 09:34:36 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:49.519 09:34:36 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:49.781 09:34:36 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:49.781 09:34:36 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:50.042 09:34:37 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:25:50.042 09:34:37 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:50.304 09:34:37 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:50.304 09:34:37 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:25:50.304 09:34:37 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:50.304 09:34:37 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:50.304 09:34:37 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:50.304 [2024-07-15 09:34:37.452769] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.304 09:34:37 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:50.565 09:34:37 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:50.565 09:34:37 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:50.825 09:34:37 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:50.825 09:34:37 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:50.825 09:34:37 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:51.086 [2024-07-15 09:34:38.135247] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.086 09:34:38 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:51.348 09:34:38 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:25:51.348 09:34:38 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:51.348 09:34:38 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:51.348 09:34:38 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:52.734 Initializing NVMe Controllers 00:25:52.734 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:25:52.734 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:25:52.734 Initialization complete. Launching workers. 00:25:52.734 ======================================================== 00:25:52.734 Latency(us) 00:25:52.734 Device Information : IOPS MiB/s Average min max 00:25:52.734 PCIE (0000:65:00.0) NSID 1 from core 0: 79670.95 311.21 401.26 13.40 4919.60 00:25:52.734 ======================================================== 00:25:52.734 Total : 79670.95 311.21 401.26 13.40 4919.60 00:25:52.734 00:25:52.734 09:34:39 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:52.734 EAL: No free 2048 kB hugepages reported on node 1 00:25:53.678 Initializing NVMe Controllers 00:25:53.678 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:53.678 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:53.678 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:53.678 Initialization complete. Launching workers. 
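Before these perf passes, the trace above assembled the target configuration: the local NVMe device is attached as Nvme0n1 via gen_nvme.sh / load_subsystem_config, a malloc bdev is created, and both are exported through one subsystem with a TCP listener plus a discovery listener. Collected into one place, with paths shortened relative to the SPDK checkout, the RPC sequence echoed by the trace is:

    rpc=./scripts/rpc.py
    $rpc bdev_malloc_create 64 512                            # returns Malloc0
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Only the grouping is editorial; every command and argument above is taken from the trace.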
00:25:53.678 ======================================================== 00:25:53.678 Latency(us) 00:25:53.678 Device Information : IOPS MiB/s Average min max 00:25:53.678 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 90.00 0.35 11112.38 197.20 46127.79 00:25:53.678 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16469.00 7948.31 47904.48 00:25:53.678 ======================================================== 00:25:53.678 Total : 151.00 0.59 13276.31 197.20 47904.48 00:25:53.678 00:25:53.678 09:34:40 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:53.939 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.324 Initializing NVMe Controllers 00:25:55.324 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:55.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:55.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:55.324 Initialization complete. Launching workers. 00:25:55.324 ======================================================== 00:25:55.324 Latency(us) 00:25:55.324 Device Information : IOPS MiB/s Average min max 00:25:55.324 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11405.00 44.55 2807.16 336.48 6605.80 00:25:55.324 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3724.00 14.55 8630.43 4215.03 16239.52 00:25:55.324 ======================================================== 00:25:55.324 Total : 15129.00 59.10 4240.55 336.48 16239.52 00:25:55.324 00:25:55.324 09:34:42 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:55.324 09:34:42 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:55.324 09:34:42 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:55.324 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.892 Initializing NVMe Controllers 00:25:57.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:57.892 Controller IO queue size 128, less than required. 00:25:57.892 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:57.892 Controller IO queue size 128, less than required. 00:25:57.892 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:57.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:57.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:57.892 Initialization complete. Launching workers. 
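The two tables above are the -q 1 and -q 32 passes against the same TCP target, and the run now starting is the large-block -q 128 / 256 KiB pass, so the sweep varies queue depth and IO size while the workload mix stays randrw 50/50. A condensed sketch of the sweep with the connection string factored out (flags copied from the trace, paths shortened relative to the SPDK checkout):

    perf=./build/bin/spdk_nvme_perf
    tgt='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

    $perf -q 1   -o 4096   -w randrw -M 50 -t 1     -r "$tgt"        # QD 1, 4 KiB: latency floor
    $perf -q 32  -o 4096   -w randrw -M 50 -t 1 -HI -r "$tgt"        # QD 32, 4 KiB: small-block IOPS
    $perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r "$tgt"   # QD 128, 256 KiB: bandwidth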
00:25:57.892 ======================================================== 00:25:57.892 Latency(us) 00:25:57.892 Device Information : IOPS MiB/s Average min max 00:25:57.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1494.63 373.66 86887.91 49294.18 134474.13 00:25:57.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 608.93 152.23 225223.41 64174.22 358513.33 00:25:57.892 ======================================================== 00:25:57.892 Total : 2103.56 525.89 126932.40 49294.18 358513.33 00:25:57.892 00:25:57.892 09:34:44 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:57.892 EAL: No free 2048 kB hugepages reported on node 1 00:25:58.152 No valid NVMe controllers or AIO or URING devices found 00:25:58.152 Initializing NVMe Controllers 00:25:58.152 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:58.152 Controller IO queue size 128, less than required. 00:25:58.152 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:58.152 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:58.152 Controller IO queue size 128, less than required. 00:25:58.152 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:58.152 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:58.152 WARNING: Some requested NVMe devices were skipped 00:25:58.152 09:34:45 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:58.152 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.692 Initializing NVMe Controllers 00:26:00.692 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:00.692 Controller IO queue size 128, less than required. 00:26:00.692 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:00.692 Controller IO queue size 128, less than required. 00:26:00.692 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:00.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:00.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:00.692 Initialization complete. Launching workers. 
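The -o 36964 pass above produces no data on purpose of the alignment rule it exercises: spdk_nvme_perf drops any namespace whose sector size does not divide the requested IO size, and 36964 is not a multiple of the 512-byte sectors behind NSID 1 and 2, hence the "Removing this ns from test" warnings and the skipped devices. A sector-aligned size keeps the namespaces in the run; for example (36864 = 72 * 512 is an illustrative substitute, not what the script executes):

    ./build/bin/spdk_nvme_perf -q 128 -o 36864 -O 4096 -w randrw -M 50 -t 5 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4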
00:26:00.692 00:26:00.692 ==================== 00:26:00.692 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:00.692 TCP transport: 00:26:00.692 polls: 25487 00:26:00.692 idle_polls: 10664 00:26:00.692 sock_completions: 14823 00:26:00.692 nvme_completions: 5709 00:26:00.692 submitted_requests: 8646 00:26:00.692 queued_requests: 1 00:26:00.692 00:26:00.692 ==================== 00:26:00.692 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:00.692 TCP transport: 00:26:00.692 polls: 25959 00:26:00.692 idle_polls: 13192 00:26:00.692 sock_completions: 12767 00:26:00.692 nvme_completions: 6087 00:26:00.692 submitted_requests: 9186 00:26:00.692 queued_requests: 1 00:26:00.692 ======================================================== 00:26:00.692 Latency(us) 00:26:00.692 Device Information : IOPS MiB/s Average min max 00:26:00.692 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1427.00 356.75 90872.17 48480.63 163602.23 00:26:00.692 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1521.50 380.37 85234.54 41426.63 138155.11 00:26:00.692 ======================================================== 00:26:00.692 Total : 2948.49 737.12 87963.01 41426.63 163602.23 00:26:00.692 00:26:00.692 09:34:47 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:26:00.692 09:34:47 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:00.954 09:34:48 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:26:00.954 09:34:48 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:00.954 09:34:48 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:26:00.954 09:34:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:00.954 09:34:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:26:00.954 09:34:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:00.954 09:34:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:26:00.954 09:34:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:00.954 09:34:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:00.954 rmmod nvme_tcp 00:26:00.954 rmmod nvme_fabrics 00:26:00.954 rmmod nvme_keyring 00:26:00.954 09:34:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:00.954 09:34:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:26:00.954 09:34:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:26:00.954 09:34:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 807236 ']' 00:26:00.954 09:34:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 807236 00:26:00.954 09:34:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 807236 ']' 00:26:00.954 09:34:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 807236 00:26:00.954 09:34:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:26:00.954 09:34:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:00.954 09:34:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 807236 00:26:00.954 09:34:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:00.954 09:34:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:00.954 09:34:48 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 807236' 00:26:00.954 killing process with pid 807236 00:26:00.954 09:34:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 807236 00:26:00.954 09:34:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 807236 00:26:03.496 09:34:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:03.496 09:34:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:03.496 09:34:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:03.496 09:34:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:03.496 09:34:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:03.496 09:34:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.496 09:34:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:03.496 09:34:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.407 09:34:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:05.407 00:26:05.407 real 0m24.862s 00:26:05.407 user 0m59.134s 00:26:05.407 sys 0m8.511s 00:26:05.407 09:34:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:05.407 09:34:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:05.407 ************************************ 00:26:05.407 END TEST nvmf_perf 00:26:05.407 ************************************ 00:26:05.407 09:34:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:05.407 09:34:52 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:05.407 09:34:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:05.407 09:34:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:05.407 09:34:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:05.407 ************************************ 00:26:05.407 START TEST nvmf_fio_host 00:26:05.407 ************************************ 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:05.407 * Looking for test storage... 
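The teardown that closes nvmf_perf above follows the nvmftestfini path registered in the trap earlier: stop the target, unload the kernel initiator modules, tear down the test namespace, and flush the initiator-side address. A condensed sketch of that sequence (the kill, modprobe, and flush commands are from the trace; deleting the namespace is an assumption about what _remove_spdk_ns does in this configuration):

    kill "$nvmfpid" && wait "$nvmfpid"      # killprocess 807236
    modprobe -v -r nvme-tcp                 # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines come from dependency removal
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk         # assumed body of _remove_spdk_ns here
    ip -4 addr flush cvl_0_1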
00:26:05.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:05.407 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:05.408 09:34:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:05.408 09:34:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:26:05.408 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:05.408 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:05.408 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:05.408 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:05.408 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:05.408 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.408 09:34:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:05.408 09:34:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.408 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:05.408 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:05.408 09:34:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:26:05.408 09:34:52 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:13.544 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:13.545 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:13.545 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:13.545 Found net devices under 0000:31:00.0: cvl_0_0 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:13.545 Found net devices under 0000:31:00.1: cvl_0_1 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
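The "Found 0000:31:00.x" and "Found net devices under ..." lines above come from nvmf/common.sh walking the PCI bus for supported NICs (the e810 device IDs 0x1592/0x159b in this run) and then mapping each PCI function to its kernel interface through sysfs. A reduced sketch of that mapping step, using the device addresses from this run:

    for pci in 0000:31:00.0 0000:31:00.1; do
        # each entry under .../net/ is a kernel interface bound to that PCI function
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done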
00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:13.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:13.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:26:13.545 00:26:13.545 --- 10.0.0.2 ping statistics --- 00:26:13.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.545 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:13.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
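The nvmf_tcp_init sequence traced above splits the two E810 ports across a network namespace so one machine can act as both target and initiator: cvl_0_0 moves into cvl_0_0_ns_spdk with the target address 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1, and the two pings (10.0.0.2 above, 10.0.0.1 completing just below) verify reachability in both directions. Collected from the trace into one block:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # host firewall rule for the NVMe/TCP port
    ping -c 1 10.0.0.2                                            # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator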
00:26:13.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:26:13.545 00:26:13.545 --- 10.0.0.1 ping statistics --- 00:26:13.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.545 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:13.545 09:35:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:13.546 09:35:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:26:13.546 09:35:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:26:13.546 09:35:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:13.546 09:35:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.546 09:35:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=814680 00:26:13.546 09:35:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:13.546 09:35:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:13.546 09:35:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 814680 00:26:13.546 09:35:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 814680 ']' 00:26:13.546 09:35:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.546 09:35:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:13.546 09:35:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.546 09:35:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:13.546 09:35:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.546 [2024-07-15 09:35:00.670982] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:26:13.546 [2024-07-15 09:35:00.671053] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.546 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.807 [2024-07-15 09:35:00.753155] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:13.807 [2024-07-15 09:35:00.829033] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:13.807 [2024-07-15 09:35:00.829074] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.807 [2024-07-15 09:35:00.829082] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.807 [2024-07-15 09:35:00.829089] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.807 [2024-07-15 09:35:00.829095] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:13.807 [2024-07-15 09:35:00.829268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.807 [2024-07-15 09:35:00.829413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:13.807 [2024-07-15 09:35:00.829570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.807 [2024-07-15 09:35:00.829572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:14.404 09:35:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:14.404 09:35:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:26:14.404 09:35:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:14.404 [2024-07-15 09:35:01.585672] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:14.664 09:35:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:26:14.664 09:35:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:14.664 09:35:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.664 09:35:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:14.664 Malloc1 00:26:14.664 09:35:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:14.926 09:35:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:15.186 09:35:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:15.186 [2024-07-15 09:35:02.299083] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:15.186 09:35:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:15.446 09:35:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:15.446 09:35:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:15.446 09:35:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:26:15.446 09:35:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:15.446 09:35:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:15.446 09:35:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:15.446 09:35:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:15.446 09:35:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:26:15.446 09:35:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:15.446 09:35:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:15.446 09:35:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:15.446 09:35:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:26:15.446 09:35:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:15.446 09:35:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:15.446 09:35:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:15.446 09:35:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:15.446 09:35:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:15.446 09:35:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:15.446 09:35:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:15.446 09:35:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:15.446 09:35:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:15.446 09:35:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:15.446 09:35:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:15.706 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:15.706 fio-3.35 00:26:15.706 Starting 1 thread 00:26:15.706 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.245 00:26:18.245 test: (groupid=0, jobs=1): err= 0: pid=815499: Mon Jul 15 09:35:05 2024 00:26:18.245 read: IOPS=13.8k, BW=53.7MiB/s (56.3MB/s)(108MiB/2004msec) 00:26:18.245 slat (usec): min=2, max=285, avg= 2.19, stdev= 2.44 00:26:18.245 clat (usec): min=3163, max=9278, avg=5127.82, stdev=392.62 00:26:18.245 lat (usec): min=3165, max=9284, avg=5130.01, stdev=392.85 00:26:18.245 clat percentiles (usec): 00:26:18.245 | 1.00th=[ 4228], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:26:18.245 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5145], 60.00th=[ 5211], 00:26:18.245 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5538], 95.00th=[ 5669], 00:26:18.245 | 99.00th=[ 5997], 99.50th=[ 6521], 99.90th=[ 8455], 99.95th=[ 8848], 00:26:18.245 | 99.99th=[ 8979] 00:26:18.245 bw ( KiB/s): min=54064, max=55400, 
per=99.99%, avg=55008.00, stdev=631.78, samples=4 00:26:18.245 iops : min=13516, max=13850, avg=13752.00, stdev=157.95, samples=4 00:26:18.245 write: IOPS=13.7k, BW=53.7MiB/s (56.3MB/s)(108MiB/2004msec); 0 zone resets 00:26:18.245 slat (usec): min=2, max=275, avg= 2.28, stdev= 1.83 00:26:18.245 clat (usec): min=2421, max=8132, avg=4161.21, stdev=345.67 00:26:18.245 lat (usec): min=2424, max=8143, avg=4163.49, stdev=346.00 00:26:18.245 clat percentiles (usec): 00:26:18.245 | 1.00th=[ 3458], 5.00th=[ 3720], 10.00th=[ 3818], 20.00th=[ 3916], 00:26:18.245 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4228], 00:26:18.245 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4621], 00:26:18.245 | 99.00th=[ 4883], 99.50th=[ 6194], 99.90th=[ 7308], 99.95th=[ 7504], 00:26:18.245 | 99.99th=[ 8029] 00:26:18.245 bw ( KiB/s): min=54432, max=55312, per=99.95%, avg=54920.00, stdev=371.86, samples=4 00:26:18.245 iops : min=13608, max=13828, avg=13730.00, stdev=92.97, samples=4 00:26:18.245 lat (msec) : 4=14.52%, 10=85.48% 00:26:18.245 cpu : usr=74.64%, sys=23.41%, ctx=24, majf=0, minf=6 00:26:18.245 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:18.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:18.245 issued rwts: total=27563,27528,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.245 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:18.245 00:26:18.245 Run status group 0 (all jobs): 00:26:18.245 READ: bw=53.7MiB/s (56.3MB/s), 53.7MiB/s-53.7MiB/s (56.3MB/s-56.3MB/s), io=108MiB (113MB), run=2004-2004msec 00:26:18.245 WRITE: bw=53.7MiB/s (56.3MB/s), 53.7MiB/s-53.7MiB/s (56.3MB/s-56.3MB/s), io=108MiB (113MB), run=2004-2004msec 00:26:18.245 09:35:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:18.245 09:35:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:18.245 09:35:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:18.245 09:35:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:18.245 09:35:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:18.245 09:35:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:18.245 09:35:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:26:18.245 09:35:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:18.246 09:35:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:18.246 09:35:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:18.246 09:35:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:26:18.246 09:35:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 
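Both fio passes in this test go through the fio_plugin helper, whose end result is an ordinary fio invocation with the SPDK NVMe engine preloaded and the target described entirely in the --filename string; the ldd/grep lines above only exist to put a sanitizer runtime, if present, ahead of the plugin in LD_PRELOAD. Reduced to its essentials, with paths shortened relative to the SPDK checkout:

    LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio \
        ./app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The job file supplies ioengine=spdk and the queue depth (iodepth=128 in the output above); trtype/traddr/trsvcid/ns select the NVMe-oF namespace the way a block device path would for a kernel-backed run.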
00:26:18.246 09:35:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:18.246 09:35:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:18.246 09:35:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:18.246 09:35:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:18.246 09:35:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:18.246 09:35:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:18.246 09:35:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:18.246 09:35:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:18.246 09:35:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:18.246 09:35:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:18.505 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:18.505 fio-3.35 00:26:18.505 Starting 1 thread 00:26:18.505 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.061 00:26:21.061 test: (groupid=0, jobs=1): err= 0: pid=816028: Mon Jul 15 09:35:07 2024 00:26:21.061 read: IOPS=9376, BW=147MiB/s (154MB/s)(294MiB/2007msec) 00:26:21.061 slat (usec): min=3, max=113, avg= 3.63, stdev= 1.58 00:26:21.061 clat (usec): min=1400, max=15317, avg=8268.25, stdev=1949.69 00:26:21.061 lat (usec): min=1403, max=15321, avg=8271.89, stdev=1949.79 00:26:21.061 clat percentiles (usec): 00:26:21.061 | 1.00th=[ 4228], 5.00th=[ 5276], 10.00th=[ 5735], 20.00th=[ 6456], 00:26:21.061 | 30.00th=[ 7111], 40.00th=[ 7570], 50.00th=[ 8160], 60.00th=[ 8848], 00:26:21.061 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10814], 95.00th=[11338], 00:26:21.061 | 99.00th=[13042], 99.50th=[13435], 99.90th=[14222], 99.95th=[14615], 00:26:21.061 | 99.99th=[14615] 00:26:21.061 bw ( KiB/s): min=63264, max=90144, per=49.10%, avg=73656.00, stdev=11670.58, samples=4 00:26:21.061 iops : min= 3954, max= 5634, avg=4603.50, stdev=729.41, samples=4 00:26:21.061 write: IOPS=5709, BW=89.2MiB/s (93.5MB/s)(151MiB/1693msec); 0 zone resets 00:26:21.061 slat (usec): min=40, max=328, avg=40.99, stdev= 6.76 00:26:21.061 clat (usec): min=2030, max=15948, avg=9499.98, stdev=1595.61 00:26:21.061 lat (usec): min=2070, max=15988, avg=9540.98, stdev=1596.44 00:26:21.061 clat percentiles (usec): 00:26:21.061 | 1.00th=[ 6128], 5.00th=[ 7177], 10.00th=[ 7635], 20.00th=[ 8160], 00:26:21.061 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9765], 00:26:21.061 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11469], 95.00th=[12125], 00:26:21.061 | 99.00th=[13829], 99.50th=[14222], 99.90th=[15008], 99.95th=[15270], 00:26:21.061 | 99.99th=[15926] 00:26:21.061 bw ( KiB/s): min=65632, max=92832, per=84.14%, avg=76864.00, stdev=11543.00, samples=4 00:26:21.061 iops : min= 4104, max= 5802, avg=4804.00, stdev=720.82, samples=4 00:26:21.061 lat (msec) : 2=0.04%, 4=0.58%, 10=74.76%, 20=24.62% 00:26:21.061 cpu : usr=85.09%, sys=13.41%, ctx=17, majf=0, minf=22 00:26:21.061 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:21.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:21.061 issued rwts: total=18819,9666,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.061 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:21.061 00:26:21.061 Run status group 0 (all jobs): 00:26:21.061 READ: bw=147MiB/s (154MB/s), 147MiB/s-147MiB/s (154MB/s-154MB/s), io=294MiB (308MB), run=2007-2007msec 00:26:21.061 WRITE: bw=89.2MiB/s (93.5MB/s), 89.2MiB/s-89.2MiB/s (93.5MB/s-93.5MB/s), io=151MiB (158MB), run=1693-1693msec 00:26:21.061 09:35:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:21.061 09:35:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:26:21.061 09:35:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:21.061 09:35:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:26:21.062 09:35:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:26:21.062 09:35:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:21.062 09:35:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:26:21.062 09:35:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:21.062 09:35:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:26:21.062 09:35:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:21.062 09:35:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:21.062 rmmod nvme_tcp 00:26:21.062 rmmod nvme_fabrics 00:26:21.062 rmmod nvme_keyring 00:26:21.062 09:35:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:21.062 09:35:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:26:21.062 09:35:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:26:21.062 09:35:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 814680 ']' 00:26:21.062 09:35:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 814680 00:26:21.062 09:35:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 814680 ']' 00:26:21.062 09:35:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 814680 00:26:21.062 09:35:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:26:21.062 09:35:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:21.062 09:35:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 814680 00:26:21.321 09:35:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:21.321 09:35:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:21.321 09:35:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 814680' 00:26:21.321 killing process with pid 814680 00:26:21.321 09:35:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 814680 00:26:21.321 09:35:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 814680 00:26:21.321 09:35:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:21.321 09:35:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:21.321 09:35:08 nvmf_tcp.nvmf_fio_host 
-- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:21.321 09:35:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:21.321 09:35:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:21.321 09:35:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.321 09:35:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:21.321 09:35:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.860 09:35:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:23.860 00:26:23.860 real 0m18.206s 00:26:23.860 user 1m7.518s 00:26:23.860 sys 0m8.056s 00:26:23.860 09:35:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:23.860 09:35:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.860 ************************************ 00:26:23.860 END TEST nvmf_fio_host 00:26:23.860 ************************************ 00:26:23.860 09:35:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:23.860 09:35:10 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:23.860 09:35:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:23.860 09:35:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:23.860 09:35:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:23.860 ************************************ 00:26:23.860 START TEST nvmf_failover 00:26:23.860 ************************************ 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:23.860 * Looking for test storage... 
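Each host test in this log is driven through the same run_test helper visible at the nvmf_perf -> nvmf_fio_host -> nvmf_failover transitions: it prints the START/END banners, times the script, and propagates its exit status. The real definition lives in autotest_common.sh; the following is only an inferred, illustrative shape based on the banners and the real/user/sys timing shown above:

    run_test() {                       # illustrative reconstruction, not the harness's actual helper
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test nvmf_failover ./test/nvmf/host/failover.sh --transport=tcp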
00:26:23.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.860 09:35:10 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:26:23.861 09:35:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:31.995 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:31.995 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:26:31.995 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:31.995 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:31.995 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:31.995 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:31.995 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:31.995 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:26:31.995 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:31.995 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:26:31.995 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:26:31.995 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:26:31.995 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:26:31.995 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:26:31.995 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:26:31.995 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:31.995 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:31.995 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:31.995 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:31.995 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:31.995 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:31.996 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:31.996 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:31.996 Found net devices under 0000:31:00.0: cvl_0_0 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:31.996 Found net devices under 0000:31:00.1: cvl_0_1 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:31.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:31.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:26:31.996 00:26:31.996 --- 10.0.0.2 ping statistics --- 00:26:31.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.996 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:31.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:31.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:26:31.996 00:26:31.996 --- 10.0.0.1 ping statistics --- 00:26:31.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.996 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=821273 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 821273 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 821273 ']' 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:31.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:31.996 09:35:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:31.996 [2024-07-15 09:35:18.946909] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
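The target that is starting here runs inside a dedicated network namespace: nvmf_tcp_init has moved one of the two ice ports (cvl_0_0) into cvl_0_0_ns_spdk and left the other (cvl_0_1) in the root namespace as the initiator side, which is why the two pings above go out over the physical ports rather than loopback. Condensed from the traced commands, the plumbing and the target launch amount to roughly the following; this is a sketch for reference, not a literal re-run, and the relative nvmf_tgt path stands in for the full workspace path:

  ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # first e810 port moves into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic to port 4420
  modprobe nvme-tcp                                              # kernel initiator transport
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # reactors on cores 1-3; the script waits on the RPC socket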
00:26:31.996 [2024-07-15 09:35:18.946975] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:31.996 EAL: No free 2048 kB hugepages reported on node 1 00:26:31.996 [2024-07-15 09:35:19.027985] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:31.996 [2024-07-15 09:35:19.121792] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:31.996 [2024-07-15 09:35:19.121851] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:31.996 [2024-07-15 09:35:19.121861] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:31.996 [2024-07-15 09:35:19.121868] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:31.996 [2024-07-15 09:35:19.121874] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:31.996 [2024-07-15 09:35:19.122009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:31.996 [2024-07-15 09:35:19.122170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:31.996 [2024-07-15 09:35:19.122171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:32.566 09:35:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:32.566 09:35:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:26:32.566 09:35:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:32.566 09:35:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:32.566 09:35:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:32.827 09:35:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:32.827 09:35:19 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:32.827 [2024-07-15 09:35:19.907983] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:32.827 09:35:19 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:33.087 Malloc0 00:26:33.087 09:35:20 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:33.348 09:35:20 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:33.348 09:35:20 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:33.608 [2024-07-15 09:35:20.571286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:33.608 09:35:20 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:33.608 [2024-07-15 
09:35:20.707606] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:33.608 09:35:20 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:33.869 [2024-07-15 09:35:20.848051] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:33.869 09:35:20 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=821700 00:26:33.869 09:35:20 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:33.869 09:35:20 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:33.869 09:35:20 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 821700 /var/tmp/bdevperf.sock 00:26:33.869 09:35:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 821700 ']' 00:26:33.869 09:35:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:33.869 09:35:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:33.869 09:35:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:33.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:33.869 09:35:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:33.869 09:35:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:34.809 09:35:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:34.809 09:35:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:26:34.809 09:35:21 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:34.810 NVMe0n1 00:26:34.810 09:35:22 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:35.381 00:26:35.381 09:35:22 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=821961 00:26:35.381 09:35:22 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:26:35.381 09:35:22 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:36.323 09:35:23 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:36.323 [2024-07-15 09:35:23.511331] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1456770 is same with the state(5) to be set 00:26:36.323 [2024-07-15 09:35:23.511367] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1456770 is same with the state(5) to be set [...] [2024-07-15 09:35:23.511764] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1456770 is same with the state(5) to be set 00:26:36.584 09:35:23 
nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:26:39.883 09:35:26 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:39.883 00:26:39.883 09:35:26 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:39.883 [2024-07-15 09:35:27.000111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.883 [2024-07-15 09:35:27.000142] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.883 [2024-07-15 09:35:27.000148] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000153] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000157] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000162] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000174] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000179] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000183] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000187] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000196] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000209] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000213] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000217] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000226] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set [...] [2024-07-15 09:35:27.000319] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the 
state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000324] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000328] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000332] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000337] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000345] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000353] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000357] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000370] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000375] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000380] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000384] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000397] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 [2024-07-15 09:35:27.000405] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457e70 is same with the state(5) to be set 00:26:39.884 09:35:27 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:26:43.180 09:35:30 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:43.180 [2024-07-15 09:35:30.173028] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:43.180 09:35:30 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:26:44.121 09:35:31 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:44.381 [2024-07-15 09:35:31.347581] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458550 is same with the state(5) to be set 00:26:44.381 [2024-07-15 09:35:31.347612] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458550 is same with the state(5) to be set 00:26:44.381 [2024-07-15 09:35:31.347617] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458550 is same with the state(5) to be set 00:26:44.381 [2024-07-15 09:35:31.347622] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458550 is same with the state(5) to be set 00:26:44.381 [2024-07-15 09:35:31.347627] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458550 is same with the state(5) to be set 00:26:44.381 [2024-07-15 09:35:31.347632] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458550 is same with the state(5) to be set 00:26:44.381 [2024-07-15 09:35:31.347636] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458550 is same with the state(5) to be set 00:26:44.381 09:35:31 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 821961 00:26:51.020 0 00:26:51.020 09:35:37 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 821700 00:26:51.020 09:35:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 821700 ']' 00:26:51.020 09:35:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 821700 00:26:51.020 09:35:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:26:51.020 09:35:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:51.020 09:35:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 821700 00:26:51.020 09:35:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:51.020 09:35:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:51.020 09:35:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 821700' 00:26:51.020 killing process with pid 821700 00:26:51.020 09:35:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 821700 00:26:51.020 09:35:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 821700 00:26:51.020 09:35:37 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:51.020 [2024-07-15 09:35:20.918696] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
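The failover body itself has finished by this point (the wait on run_test_pid 821961 returned 0 just above) and the script is replaying the bdevperf side of the run from try.txt. Stripped of the xtrace noise, the scenario it exercised is: one malloc-backed subsystem with three TCP listeners, bdevperf driving verify I/O at queue depth 128 over the paths it has attached, and the test removing and re-adding listeners underneath it to force path failover. A condensed sketch of the driving RPC sequence, with R standing in for the full rpc.py path used in the trace and target-side calls going to the default RPC socket:

  R=./scripts/rpc.py
  # target side: one malloc namespace, three TCP listeners
  $R nvmf_create_transport -t tcp -o -u 8192
  $R bdev_malloc_create 64 512 -b Malloc0
  $R nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $R nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
    $R nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done
  # initiator side: bdevperf holds NVMe0 over two paths while the verify workload runs
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  $R -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $R -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &   # 15 s verify run
  # yank listeners one at a time while I/O is in flight
  $R nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # traffic fails over to 4421
  sleep 3
  $R -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $R nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # fails over again to 4422
  sleep 3
  $R nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # restore the original port
  sleep 1
  $R nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # and fall back to it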
00:26:51.020 [2024-07-15 09:35:20.918770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid821700 ] 00:26:51.020 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.021 [2024-07-15 09:35:20.984509] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.021 [2024-07-15 09:35:21.048773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.021 Running I/O for 15 seconds... 00:26:51.021 [2024-07-15 09:35:23.512119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 
lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 
09:35:23.512636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512816] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.021 [2024-07-15 09:35:23.512843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.021 [2024-07-15 09:35:23.512850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.512859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.512866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.512875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.512882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.512896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.512904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.512913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.512921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.512931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.512937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.512947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.512956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.512967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.512974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.512983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.512992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 
[2024-07-15 09:35:23.513337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.022 [2024-07-15 09:35:23.513377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.022 [2024-07-15 09:35:23.513553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.022 [2024-07-15 09:35:23.513560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:49 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97192 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.513984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.513991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.514000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:51.023 [2024-07-15 09:35:23.514007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.514016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.023 [2024-07-15 09:35:23.514022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.514032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.023 [2024-07-15 09:35:23.514040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.514050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.023 [2024-07-15 09:35:23.514057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.514065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.023 [2024-07-15 09:35:23.514072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.514082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.023 [2024-07-15 09:35:23.514088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.514098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.023 [2024-07-15 09:35:23.514105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.514113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.023 [2024-07-15 09:35:23.514120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.514130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.023 [2024-07-15 09:35:23.514137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.514146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.023 [2024-07-15 09:35:23.514152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.514163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.023 [2024-07-15 09:35:23.514169] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.514179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.023 [2024-07-15 09:35:23.514186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.514195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.023 [2024-07-15 09:35:23.514202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.514210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.023 [2024-07-15 09:35:23.514217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.514227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.023 [2024-07-15 09:35:23.514233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.514243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.023 [2024-07-15 09:35:23.514253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.023 [2024-07-15 09:35:23.514262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:23.514269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:23.514278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:23.514286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:23.514304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.024 [2024-07-15 09:35:23.514311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.024 [2024-07-15 09:35:23.514318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97424 len:8 PRP1 0x0 PRP2 0x0 00:26:51.024 [2024-07-15 09:35:23.514326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:23.514363] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c13df0 was disconnected and freed. reset controller. 
00:26:51.024 [2024-07-15 09:35:23.514372] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:51.024 [2024-07-15 09:35:23.514391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.024 [2024-07-15 09:35:23.514399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:23.514407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.024 [2024-07-15 09:35:23.514415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:23.514423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.024 [2024-07-15 09:35:23.514430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:23.514438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.024 [2024-07-15 09:35:23.514444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:23.514452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.024 [2024-07-15 09:35:23.518012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.024 [2024-07-15 09:35:23.518038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c17ea0 (9): Bad file descriptor 00:26:51.024 [2024-07-15 09:35:23.559015] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:51.024 [2024-07-15 09:35:27.004279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.024 [2024-07-15 09:35:27.004319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:31040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:31056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:31072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:31080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:31088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:31096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004491] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:31104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:31120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:31144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:31160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:31168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:31176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:56 nsid:1 lba:31184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:31192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:31208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:31224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:31232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:31240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:31248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:31256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31264 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.024 [2024-07-15 09:35:27.004853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.024 [2024-07-15 09:35:27.004860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.004869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.025 [2024-07-15 09:35:27.004876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.004886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:31024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.025 [2024-07-15 09:35:27.004894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.004904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:31280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.004911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.004920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:31288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.004927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.004936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.004943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.004952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:31304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.004960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.004968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.004976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.004985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:31320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.004993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.005002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:31328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 
09:35:27.005009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.005017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:31336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.005025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.005034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.005041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.005050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:31352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.005057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.005065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:31360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.005073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.005082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:31368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.005089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.005098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:31376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.005105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.005114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:31384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.005122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.005131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:31392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.005139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.005147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:31400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.005154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.005163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:31408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.005170] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.005179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:31416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.005186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.005195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.005204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.005212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:31432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.005221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.005230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:31440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.005238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.005247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.005253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.005263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:31456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.005270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.005280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:31464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.005287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.005296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:31472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.005303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.005311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:31480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.005318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.005327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:31488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.025 [2024-07-15 09:35:27.005334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.025 [2024-07-15 09:35:27.005343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:31496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.026 [2024-07-15 09:35:27.005350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.026 [2024-07-15 09:35:27.005359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:31504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.026 [2024-07-15 09:35:27.005366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.026 [2024-07-15 09:35:27.005375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:31512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.026 [2024-07-15 09:35:27.005382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.026 [2024-07-15 09:35:27.005391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:31520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.026 [2024-07-15 09:35:27.005398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.026 [2024-07-15 09:35:27.005409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:31528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.026 [2024-07-15 09:35:27.005416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.026 [2024-07-15 09:35:27.005435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.026 [2024-07-15 09:35:27.005443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31536 len:8 PRP1 0x0 PRP2 0x0 00:26:51.026 [2024-07-15 09:35:27.005450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.026 [2024-07-15 09:35:27.005461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.026 [2024-07-15 09:35:27.005466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.026 [2024-07-15 09:35:27.005472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31544 len:8 PRP1 0x0 PRP2 0x0 00:26:51.026 [2024-07-15 09:35:27.005480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.026 [2024-07-15 09:35:27.005488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.026 [2024-07-15 09:35:27.005493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.026 [2024-07-15 09:35:27.005499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31552 len:8 PRP1 0x0 PRP2 0x0 00:26:51.026 [2024-07-15 09:35:27.005506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.026 [2024-07-15 09:35:27.005514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:26:51.026 [2024-07-15 09:35:27.005519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.026 [2024-07-15 09:35:27.005525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31560 len:8 PRP1 0x0 PRP2 0x0 00:26:51.026 [2024-07-15 09:35:27.005533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.026 [2024-07-15 09:35:27.005540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.026 [2024-07-15 09:35:27.005546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.026 [2024-07-15 09:35:27.005552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31568 len:8 PRP1 0x0 PRP2 0x0 00:26:51.026 [2024-07-15 09:35:27.005559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.026 [2024-07-15 09:35:27.005566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.026 [2024-07-15 09:35:27.005572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.026 [2024-07-15 09:35:27.005581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31576 len:8 PRP1 0x0 PRP2 0x0 00:26:51.026 [2024-07-15 09:35:27.005588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.026 [2024-07-15 09:35:27.005598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.026 [2024-07-15 09:35:27.005604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.026 [2024-07-15 09:35:27.005611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31584 len:8 PRP1 0x0 PRP2 0x0 00:26:51.026 [2024-07-15 09:35:27.005618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.026 [2024-07-15 09:35:27.005626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.026 [2024-07-15 09:35:27.005633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.026 [2024-07-15 09:35:27.005639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31592 len:8 PRP1 0x0 PRP2 0x0 00:26:51.026 [2024-07-15 09:35:27.005647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.026 [2024-07-15 09:35:27.005654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.026 [2024-07-15 09:35:27.005659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.026 [2024-07-15 09:35:27.005665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31600 len:8 PRP1 0x0 PRP2 0x0 00:26:51.026 [2024-07-15 09:35:27.005672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.026 [2024-07-15 09:35:27.005680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.026 [2024-07-15 09:35:27.005685] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.026 [2024-07-15 09:35:27.005692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31608 len:8 PRP1 0x0 PRP2 0x0 00:26:51.026 [2024-07-15 09:35:27.005699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.026 [2024-07-15 09:35:27.005707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.026 [2024-07-15 09:35:27.005712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.026 [2024-07-15 09:35:27.005718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31616 len:8 PRP1 0x0 PRP2 0x0 00:26:51.026 [2024-07-15 09:35:27.005725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.026 [2024-07-15 09:35:27.005732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.026 [2024-07-15 09:35:27.005737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.026 [2024-07-15 09:35:27.005743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31624 len:8 PRP1 0x0 PRP2 0x0 00:26:51.026 [2024-07-15 09:35:27.005754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.026 [2024-07-15 09:35:27.005761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.026 [2024-07-15 09:35:27.005767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.026 [2024-07-15 09:35:27.005773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31632 len:8 PRP1 0x0 PRP2 0x0 00:26:51.026 [2024-07-15 09:35:27.005780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.026 [2024-07-15 09:35:27.005787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.026 [2024-07-15 09:35:27.005793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.026 [2024-07-15 09:35:27.005799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31640 len:8 PRP1 0x0 PRP2 0x0 00:26:51.026 [2024-07-15 09:35:27.005805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.026 [2024-07-15 09:35:27.005813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.026 [2024-07-15 09:35:27.005818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.026 [2024-07-15 09:35:27.005824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31648 len:8 PRP1 0x0 PRP2 0x0 00:26:51.026 [2024-07-15 09:35:27.005832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.026 [2024-07-15 09:35:27.005842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.026 [2024-07-15 09:35:27.005847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:26:51.026 [2024-07-15 09:35:27.005853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31656 len:8 PRP1 0x0 PRP2 0x0 00:26:51.026 [2024-07-15 09:35:27.005861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.026 [2024-07-15 09:35:27.005869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.026 [2024-07-15 09:35:27.005875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.026 [2024-07-15 09:35:27.005881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31664 len:8 PRP1 0x0 PRP2 0x0 00:26:51.026 [2024-07-15 09:35:27.005888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.026 [2024-07-15 09:35:27.005896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.026 [2024-07-15 09:35:27.005901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.026 [2024-07-15 09:35:27.005907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31672 len:8 PRP1 0x0 PRP2 0x0 00:26:51.026 [2024-07-15 09:35:27.005915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.026 [2024-07-15 09:35:27.005923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.026 [2024-07-15 09:35:27.005929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.026 [2024-07-15 09:35:27.005935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31680 len:8 PRP1 0x0 PRP2 0x0 00:26:51.026 [2024-07-15 09:35:27.005941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.027 [2024-07-15 09:35:27.005948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.027 [2024-07-15 09:35:27.005954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.027 [2024-07-15 09:35:27.005960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31688 len:8 PRP1 0x0 PRP2 0x0 00:26:51.027 [2024-07-15 09:35:27.005967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.027 [2024-07-15 09:35:27.005975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.027 [2024-07-15 09:35:27.005980] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.027 [2024-07-15 09:35:27.005986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31696 len:8 PRP1 0x0 PRP2 0x0 00:26:51.027 [2024-07-15 09:35:27.005993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.027 [2024-07-15 09:35:27.006004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.027 [2024-07-15 09:35:27.006010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.027 
[2024-07-15 09:35:27.006015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31704 len:8 PRP1 0x0 PRP2 0x0 00:26:51.027 [2024-07-15 09:35:27.006022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.027 [2024-07-15 09:35:27.006030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.027 [2024-07-15 09:35:27.006036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.027 [2024-07-15 09:35:27.006041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31712 len:8 PRP1 0x0 PRP2 0x0 00:26:51.027 [2024-07-15 09:35:27.006049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.027 [2024-07-15 09:35:27.006057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.027 [2024-07-15 09:35:27.006062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.027 [2024-07-15 09:35:27.006068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31720 len:8 PRP1 0x0 PRP2 0x0 00:26:51.027 [2024-07-15 09:35:27.006074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.027 [2024-07-15 09:35:27.006081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.027 [2024-07-15 09:35:27.006087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.027 [2024-07-15 09:35:27.006093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31728 len:8 PRP1 0x0 PRP2 0x0 00:26:51.027 [2024-07-15 09:35:27.006100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.027 [2024-07-15 09:35:27.006107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.027 [2024-07-15 09:35:27.006112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.027 [2024-07-15 09:35:27.006118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31736 len:8 PRP1 0x0 PRP2 0x0 00:26:51.027 [2024-07-15 09:35:27.006124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.027 [2024-07-15 09:35:27.006132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.027 [2024-07-15 09:35:27.006137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.027 [2024-07-15 09:35:27.006144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31744 len:8 PRP1 0x0 PRP2 0x0 00:26:51.027 [2024-07-15 09:35:27.006150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.027 [2024-07-15 09:35:27.006157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.027 [2024-07-15 09:35:27.006163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.027 [2024-07-15 09:35:27.006169] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31752 len:8 PRP1 0x0 PRP2 0x0 00:26:51.027 [2024-07-15 09:35:27.006175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.027 [2024-07-15 09:35:27.006182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.027 [2024-07-15 09:35:27.006188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.027 [2024-07-15 09:35:27.006194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31760 len:8 PRP1 0x0 PRP2 0x0 00:26:51.027 [2024-07-15 09:35:27.006201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.027 [2024-07-15 09:35:27.006208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.027 [2024-07-15 09:35:27.006214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.027 [2024-07-15 09:35:27.006219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31768 len:8 PRP1 0x0 PRP2 0x0 00:26:51.027 [2024-07-15 09:35:27.006226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.027 [2024-07-15 09:35:27.006233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.027 [2024-07-15 09:35:27.006239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.027 [2024-07-15 09:35:27.006246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31776 len:8 PRP1 0x0 PRP2 0x0 00:26:51.027 [2024-07-15 09:35:27.006253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.027 [2024-07-15 09:35:27.006260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.027 [2024-07-15 09:35:27.006266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.027 [2024-07-15 09:35:27.006272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31784 len:8 PRP1 0x0 PRP2 0x0 00:26:51.027 [2024-07-15 09:35:27.006279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.027 [2024-07-15 09:35:27.006287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.027 [2024-07-15 09:35:27.006292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.027 [2024-07-15 09:35:27.006298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31792 len:8 PRP1 0x0 PRP2 0x0 00:26:51.027 [2024-07-15 09:35:27.006305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.027 [2024-07-15 09:35:27.006314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.027 [2024-07-15 09:35:27.006320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.027 [2024-07-15 09:35:27.006326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:31800 len:8 PRP1 0x0 PRP2 0x0 00:26:51.027 [2024-07-15 09:35:27.006334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.027 [2024-07-15 09:35:27.006342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.027 [2024-07-15 09:35:27.006348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.027 [2024-07-15 09:35:27.006353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31808 len:8 PRP1 0x0 PRP2 0x0 00:26:51.027 [2024-07-15 09:35:27.006360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.027 [2024-07-15 09:35:27.006367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.027 [2024-07-15 09:35:27.006372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.027 [2024-07-15 09:35:27.006378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31816 len:8 PRP1 0x0 PRP2 0x0 00:26:51.027 [2024-07-15 09:35:27.006385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.027 [2024-07-15 09:35:27.006392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.027 [2024-07-15 09:35:27.006398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.027 [2024-07-15 09:35:27.006404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31824 len:8 PRP1 0x0 PRP2 0x0 00:26:51.027 [2024-07-15 09:35:27.006411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.027 [2024-07-15 09:35:27.006420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.027 [2024-07-15 09:35:27.006426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.027 [2024-07-15 09:35:27.006431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31832 len:8 PRP1 0x0 PRP2 0x0 00:26:51.027 [2024-07-15 09:35:27.006438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.027 [2024-07-15 09:35:27.006447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.027 [2024-07-15 09:35:27.006452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.027 [2024-07-15 09:35:27.006458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31840 len:8 PRP1 0x0 PRP2 0x0 00:26:51.027 [2024-07-15 09:35:27.006465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.027 [2024-07-15 09:35:27.006473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.027 [2024-07-15 09:35:27.006478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.027 [2024-07-15 09:35:27.006484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31848 len:8 PRP1 0x0 PRP2 0x0 
00:26:51.027 [2024-07-15 09:35:27.006490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.027 [2024-07-15 09:35:27.006498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.027 [2024-07-15 09:35:27.006503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.027 [2024-07-15 09:35:27.006509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31856 len:8 PRP1 0x0 PRP2 0x0 00:26:51.027 [2024-07-15 09:35:27.006516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.027 [2024-07-15 09:35:27.006523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.027 [2024-07-15 09:35:27.006528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.027 [2024-07-15 09:35:27.006534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31864 len:8 PRP1 0x0 PRP2 0x0 00:26:51.027 [2024-07-15 09:35:27.006541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.027 [2024-07-15 09:35:27.006550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.027 [2024-07-15 09:35:27.006555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.027 [2024-07-15 09:35:27.006561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31872 len:8 PRP1 0x0 PRP2 0x0 00:26:51.027 [2024-07-15 09:35:27.006568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.027 [2024-07-15 09:35:27.006575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.027 [2024-07-15 09:35:27.006580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.027 [2024-07-15 09:35:27.006586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31880 len:8 PRP1 0x0 PRP2 0x0 00:26:51.027 [2024-07-15 09:35:27.006592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.028 [2024-07-15 09:35:27.006600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.028 [2024-07-15 09:35:27.006606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.028 [2024-07-15 09:35:27.006612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31888 len:8 PRP1 0x0 PRP2 0x0 00:26:51.028 [2024-07-15 09:35:27.006619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.028 [2024-07-15 09:35:27.006626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.028 [2024-07-15 09:35:27.006631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.028 [2024-07-15 09:35:27.017088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31896 len:8 PRP1 0x0 PRP2 0x0 00:26:51.028 [2024-07-15 09:35:27.017116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.028 [2024-07-15 09:35:27.017133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.028 [2024-07-15 09:35:27.017139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.028 [2024-07-15 09:35:27.017146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31904 len:8 PRP1 0x0 PRP2 0x0 00:26:51.028 [2024-07-15 09:35:27.017153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.028 [2024-07-15 09:35:27.017160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.028 [2024-07-15 09:35:27.017166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.028 [2024-07-15 09:35:27.017172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31912 len:8 PRP1 0x0 PRP2 0x0 00:26:51.028 [2024-07-15 09:35:27.017180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.028 [2024-07-15 09:35:27.017188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.028 [2024-07-15 09:35:27.017193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.028 [2024-07-15 09:35:27.017199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31920 len:8 PRP1 0x0 PRP2 0x0 00:26:51.028 [2024-07-15 09:35:27.017208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.028 [2024-07-15 09:35:27.017216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.028 [2024-07-15 09:35:27.017221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.028 [2024-07-15 09:35:27.017227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31928 len:8 PRP1 0x0 PRP2 0x0 00:26:51.028 [2024-07-15 09:35:27.017234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.028 [2024-07-15 09:35:27.017242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.028 [2024-07-15 09:35:27.017247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.028 [2024-07-15 09:35:27.017253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31936 len:8 PRP1 0x0 PRP2 0x0 00:26:51.028 [2024-07-15 09:35:27.017261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.028 [2024-07-15 09:35:27.017268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.028 [2024-07-15 09:35:27.017274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.028 [2024-07-15 09:35:27.017279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31944 len:8 PRP1 0x0 PRP2 0x0 00:26:51.028 [2024-07-15 09:35:27.017286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.028 [2024-07-15 09:35:27.017294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.028 [2024-07-15 09:35:27.017299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.028 [2024-07-15 09:35:27.017305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31952 len:8 PRP1 0x0 PRP2 0x0 00:26:51.028 [2024-07-15 09:35:27.017312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.028 [2024-07-15 09:35:27.017321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.028 [2024-07-15 09:35:27.017327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.028 [2024-07-15 09:35:27.017334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31960 len:8 PRP1 0x0 PRP2 0x0 00:26:51.028 [2024-07-15 09:35:27.017341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.028 [2024-07-15 09:35:27.017348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.028 [2024-07-15 09:35:27.017353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.028 [2024-07-15 09:35:27.017359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31968 len:8 PRP1 0x0 PRP2 0x0 00:26:51.028 [2024-07-15 09:35:27.017366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.028 [2024-07-15 09:35:27.017374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.028 [2024-07-15 09:35:27.017379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.028 [2024-07-15 09:35:27.017385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31976 len:8 PRP1 0x0 PRP2 0x0 00:26:51.028 [2024-07-15 09:35:27.017392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.028 [2024-07-15 09:35:27.017399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.028 [2024-07-15 09:35:27.017405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.028 [2024-07-15 09:35:27.017412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31984 len:8 PRP1 0x0 PRP2 0x0 00:26:51.028 [2024-07-15 09:35:27.017418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.028 [2024-07-15 09:35:27.017426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.028 [2024-07-15 09:35:27.017431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.028 [2024-07-15 09:35:27.017437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31992 len:8 PRP1 0x0 PRP2 0x0 00:26:51.028 [2024-07-15 09:35:27.017443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:51.028 [2024-07-15 09:35:27.017451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.028 [2024-07-15 09:35:27.017456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.028 [2024-07-15 09:35:27.017462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32000 len:8 PRP1 0x0 PRP2 0x0 00:26:51.028 [2024-07-15 09:35:27.017469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.028 [2024-07-15 09:35:27.017476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.028 [2024-07-15 09:35:27.017482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.028 [2024-07-15 09:35:27.017487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32008 len:8 PRP1 0x0 PRP2 0x0 00:26:51.028 [2024-07-15 09:35:27.017494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.028 [2024-07-15 09:35:27.017502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.028 [2024-07-15 09:35:27.017509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.028 [2024-07-15 09:35:27.017515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32016 len:8 PRP1 0x0 PRP2 0x0 00:26:51.028 [2024-07-15 09:35:27.017522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.028 [2024-07-15 09:35:27.017529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.028 [2024-07-15 09:35:27.017536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.028 [2024-07-15 09:35:27.017542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32024 len:8 PRP1 0x0 PRP2 0x0 00:26:51.028 [2024-07-15 09:35:27.017550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.028 [2024-07-15 09:35:27.017588] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c467c0 was disconnected and freed. reset controller. 
00:26:51.028 [2024-07-15 09:35:27.017597] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:26:51.028 [2024-07-15 09:35:27.017624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.028 [2024-07-15 09:35:27.017633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.028 [2024-07-15 09:35:27.017642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.028 [2024-07-15 09:35:27.017650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.028 [2024-07-15 09:35:27.017659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.028 [2024-07-15 09:35:27.017666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.028 [2024-07-15 09:35:27.017674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.028 [2024-07-15 09:35:27.017682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.028 [2024-07-15 09:35:27.017689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.029 [2024-07-15 09:35:27.017718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c17ea0 (9): Bad file descriptor 00:26:51.029 [2024-07-15 09:35:27.021274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.029 [2024-07-15 09:35:27.101153] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:51.029 [2024-07-15 09:35:31.347820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:58496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.347858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.347877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.347885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.347895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:58512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.347902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.347911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:58520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.347919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.347928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.347936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.347952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.347960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.347969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:58544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.347976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.347985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:58552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.347992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.348001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:58560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.348008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.348017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:58568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.348024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.348033] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.348040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.348050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:58584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.348057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.348066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:58592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.348073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.348082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:58600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.348089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.348098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:58608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.348105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.348114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.348121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.348130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:58624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.348137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.348146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:58632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.348155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.348164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:58640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.348172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.348181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:58648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.348188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.348198] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.348204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.348214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:58664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.348221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.348230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:58672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.348237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.348245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.348252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.348261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:58688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.348268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.348277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.348284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.348292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.348300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.348309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.348316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.348326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.348332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.348342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.348349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.029 [2024-07-15 09:35:31.348358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58736 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.029 [2024-07-15 09:35:31.348366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:57744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:57752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:57768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:57776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:57784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:57792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:57800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:57808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:51.030 [2024-07-15 09:35:31.348531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:57816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:57824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:57832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:57840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:57848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:57864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:57880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:57888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348700] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:57896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:57904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:57912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.030 [2024-07-15 09:35:31.348771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:57920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:57928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:57936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:57952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:57960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:57968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:57976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:57984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:57992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:58000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:58008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.348988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:58016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.348996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.349007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:58024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.349014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.349024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:58032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.030 [2024-07-15 09:35:31.349031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.030 [2024-07-15 09:35:31.349040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:58040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349047] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:58048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:58056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:58064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:58072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:58080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:58088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:58096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:58104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.031 [2024-07-15 09:35:31.349195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:58112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:58120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:58128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:58136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:58144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:58152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:58168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:58176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:58184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:58192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:51.031 [2024-07-15 09:35:31.349384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:58200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:58208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:58216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:58224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:58232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:58240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:58248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:58256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:58264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:58272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349551] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:58280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:58288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:58296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:58304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:58312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.031 [2024-07-15 09:35:31.349633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.031 [2024-07-15 09:35:31.349641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.349650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:58328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.032 [2024-07-15 09:35:31.349658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.349667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:58336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.032 [2024-07-15 09:35:31.349673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.349682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:58344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.032 [2024-07-15 09:35:31.349689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.349699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:58352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.032 [2024-07-15 09:35:31.349706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.349716] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:58360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.032 [2024-07-15 09:35:31.349723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.349732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.032 [2024-07-15 09:35:31.349739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.349749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:58376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.032 [2024-07-15 09:35:31.349759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.349768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.032 [2024-07-15 09:35:31.349776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.349784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.032 [2024-07-15 09:35:31.349791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.349800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:58400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.032 [2024-07-15 09:35:31.349808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.349817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:58408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.032 [2024-07-15 09:35:31.349824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.349833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:58416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.032 [2024-07-15 09:35:31.349843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.349852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:58424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.032 [2024-07-15 09:35:31.349859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.349869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:58432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.032 [2024-07-15 09:35:31.349876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.349885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:86 nsid:1 lba:58440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.032 [2024-07-15 09:35:31.349892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.349901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:58448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.032 [2024-07-15 09:35:31.349908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.349918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:58456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.032 [2024-07-15 09:35:31.349925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.349934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:58464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.032 [2024-07-15 09:35:31.349940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.349949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:58472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.032 [2024-07-15 09:35:31.349956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.349966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.032 [2024-07-15 09:35:31.349973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.349982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c545b0 is same with the state(5) to be set 00:26:51.032 [2024-07-15 09:35:31.349990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.032 [2024-07-15 09:35:31.349997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.032 [2024-07-15 09:35:31.350004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58488 len:8 PRP1 0x0 PRP2 0x0 00:26:51.032 [2024-07-15 09:35:31.350011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.350047] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c545b0 was disconnected and freed. reset controller. 
00:26:51.032 [2024-07-15 09:35:31.350056] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:51.032 [2024-07-15 09:35:31.350078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.032 [2024-07-15 09:35:31.350086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.350097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.032 [2024-07-15 09:35:31.350104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.350112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.032 [2024-07-15 09:35:31.350119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.350127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.032 [2024-07-15 09:35:31.350135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.032 [2024-07-15 09:35:31.350142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.032 [2024-07-15 09:35:31.353694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.032 [2024-07-15 09:35:31.353719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c17ea0 (9): Bad file descriptor 00:26:51.032 [2024-07-15 09:35:31.508290] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:51.032 00:26:51.032 Latency(us) 00:26:51.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:51.032 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:51.032 Verification LBA range: start 0x0 length 0x4000 00:26:51.032 NVMe0n1 : 15.01 11614.69 45.37 639.33 0.00 10418.26 512.00 19660.80 00:26:51.032 =================================================================================================================== 00:26:51.032 Total : 11614.69 45.37 639.33 0.00 10418.26 512.00 19660.80 00:26:51.032 Received shutdown signal, test time was about 15.000000 seconds 00:26:51.032 00:26:51.032 Latency(us) 00:26:51.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:51.032 =================================================================================================================== 00:26:51.032 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:51.032 09:35:37 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:26:51.032 09:35:37 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:26:51.032 09:35:37 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:26:51.032 09:35:37 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=824825 00:26:51.032 09:35:37 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 824825 /var/tmp/bdevperf.sock 00:26:51.032 09:35:37 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:51.032 09:35:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 824825 ']' 00:26:51.032 09:35:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:51.032 09:35:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:51.032 09:35:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:51.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:51.032 09:35:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:51.032 09:35:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:51.602 09:35:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:51.602 09:35:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:26:51.603 09:35:38 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:51.603 [2024-07-15 09:35:38.671929] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:51.603 09:35:38 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:51.863 [2024-07-15 09:35:38.840445] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:51.863 09:35:38 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:52.124 NVMe0n1 00:26:52.124 09:35:39 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:52.385 00:26:52.385 09:35:39 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:52.647 00:26:52.647 09:35:39 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:52.647 09:35:39 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:26:52.908 09:35:39 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:52.908 09:35:40 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:26:56.217 09:35:43 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:56.217 09:35:43 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:26:56.217 09:35:43 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=825995 00:26:56.217 09:35:43 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 825995 00:26:56.217 09:35:43 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:57.162 0 00:26:57.162 09:35:44 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:57.424 [2024-07-15 09:35:37.774974] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:26:57.424 [2024-07-15 09:35:37.775032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid824825 ] 00:26:57.424 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.424 [2024-07-15 09:35:37.840479] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.424 [2024-07-15 09:35:37.904146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.424 [2024-07-15 09:35:40.045303] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:57.424 [2024-07-15 09:35:40.045361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.424 [2024-07-15 09:35:40.045374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.424 [2024-07-15 09:35:40.045383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.424 [2024-07-15 09:35:40.045391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.424 [2024-07-15 09:35:40.045399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.424 [2024-07-15 09:35:40.045406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.424 [2024-07-15 09:35:40.045414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.424 [2024-07-15 09:35:40.045421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.424 [2024-07-15 09:35:40.045429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:57.424 [2024-07-15 09:35:40.045459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:57.424 [2024-07-15 09:35:40.045476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bdea0 (9): Bad file descriptor 00:26:57.424 [2024-07-15 09:35:40.137502] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:57.424 Running I/O for 1 seconds... 
00:26:57.424 00:26:57.424 Latency(us) 00:26:57.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.424 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:57.424 Verification LBA range: start 0x0 length 0x4000 00:26:57.424 NVMe0n1 : 1.01 11406.58 44.56 0.00 0.00 11169.48 2443.95 11086.51 00:26:57.424 =================================================================================================================== 00:26:57.424 Total : 11406.58 44.56 0.00 0.00 11169.48 2443.95 11086.51 00:26:57.424 09:35:44 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:57.424 09:35:44 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:26:57.424 09:35:44 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:57.686 09:35:44 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:57.686 09:35:44 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:26:57.686 09:35:44 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:57.947 09:35:45 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:27:01.253 09:35:48 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:01.253 09:35:48 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:27:01.253 09:35:48 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 824825 00:27:01.253 09:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 824825 ']' 00:27:01.253 09:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 824825 00:27:01.253 09:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:27:01.253 09:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:01.253 09:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 824825 00:27:01.253 09:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:01.253 09:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:01.254 09:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 824825' 00:27:01.254 killing process with pid 824825 00:27:01.254 09:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 824825 00:27:01.254 09:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 824825 00:27:01.254 09:35:48 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:27:01.254 09:35:48 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:01.514 09:35:48 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:01.514 09:35:48 
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:01.514 09:35:48 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:27:01.514 09:35:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:01.514 09:35:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:27:01.514 09:35:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:01.514 09:35:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:27:01.514 09:35:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:01.514 09:35:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:01.514 rmmod nvme_tcp 00:27:01.514 rmmod nvme_fabrics 00:27:01.514 rmmod nvme_keyring 00:27:01.514 09:35:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:01.514 09:35:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:27:01.514 09:35:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:27:01.514 09:35:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 821273 ']' 00:27:01.514 09:35:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 821273 00:27:01.514 09:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 821273 ']' 00:27:01.514 09:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 821273 00:27:01.514 09:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:27:01.514 09:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:01.514 09:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 821273 00:27:01.775 09:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:01.775 09:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:01.776 09:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 821273' 00:27:01.776 killing process with pid 821273 00:27:01.776 09:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 821273 00:27:01.776 09:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 821273 00:27:01.776 09:35:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:01.776 09:35:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:01.776 09:35:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:01.776 09:35:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:01.776 09:35:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:01.776 09:35:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.776 09:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:01.776 09:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.326 09:35:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:04.326 00:27:04.327 real 0m40.345s 00:27:04.327 user 2m1.693s 00:27:04.327 sys 0m8.860s 00:27:04.327 09:35:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:04.327 09:35:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:04.327 
************************************ 00:27:04.327 END TEST nvmf_failover 00:27:04.327 ************************************ 00:27:04.327 09:35:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:04.327 09:35:50 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:04.327 09:35:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:04.327 09:35:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:04.327 09:35:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:04.327 ************************************ 00:27:04.327 START TEST nvmf_host_discovery 00:27:04.327 ************************************ 00:27:04.327 09:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:04.327 * Looking for test storage... 00:27:04.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:27:04.327 09:35:51 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:27:04.327 09:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:12.471 09:35:58 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:12.471 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:12.471 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:12.472 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:12.472 09:35:58 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:12.472 Found net devices under 0000:31:00.0: cvl_0_0 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:12.472 Found net devices under 0000:31:00.1: cvl_0_1 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:12.472 09:35:58 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:12.472 09:35:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:12.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:12.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:27:12.472 00:27:12.472 --- 10.0.0.2 ping statistics --- 00:27:12.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.472 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:12.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:12.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:27:12.472 00:27:12.472 --- 10.0.0.1 ping statistics --- 00:27:12.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.472 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=831604 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
831604 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 831604 ']' 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:12.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:12.472 09:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.472 [2024-07-15 09:35:59.241539] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:27:12.472 [2024-07-15 09:35:59.241605] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:12.472 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.472 [2024-07-15 09:35:59.336096] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.472 [2024-07-15 09:35:59.429836] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:12.472 [2024-07-15 09:35:59.429893] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:12.472 [2024-07-15 09:35:59.429902] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:12.472 [2024-07-15 09:35:59.429910] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:12.472 [2024-07-15 09:35:59.429916] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
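For readability, the nvmf_tcp_init and nvmfappstart steps traced above reduce to a short shell sequence. The sketch below is paraphrased from the xtrace output of this run; the cvl_0_* interface names, 10.0.0.x addresses, and the nvmf_tgt arguments are simply the values this particular run used (the binary path is abbreviated to a relative form):

    # move the target-side port into its own namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic in, then verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # start the target application inside the namespace (core mask 0x2, all trace groups)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &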
00:27:12.472 [2024-07-15 09:35:59.429940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:13.045 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:13.045 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:27:13.045 09:36:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:13.045 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:13.045 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.045 09:36:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:13.045 09:36:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:13.045 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.045 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.045 [2024-07-15 09:36:00.082422] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:13.045 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.045 09:36:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:13.045 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.045 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.046 [2024-07-15 09:36:00.090631] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:13.046 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.046 09:36:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:13.046 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.046 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.046 null0 00:27:13.046 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.046 09:36:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:13.046 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.046 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.046 null1 00:27:13.046 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.046 09:36:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:27:13.046 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.046 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.046 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.046 09:36:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=831791 00:27:13.046 09:36:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:13.046 09:36:00 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 831791 /tmp/host.sock 00:27:13.046 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 831791 ']' 00:27:13.046 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:27:13.046 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:13.046 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:13.046 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:13.046 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:13.046 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.046 [2024-07-15 09:36:00.169039] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:27:13.046 [2024-07-15 09:36:00.169110] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid831791 ] 00:27:13.046 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.046 [2024-07-15 09:36:00.242804] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.306 [2024-07-15 09:36:00.316899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.880 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:13.880 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:27:13.880 09:36:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:13.880 09:36:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:13.880 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.880 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.880 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.880 09:36:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:27:13.880 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.880 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.880 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.880 09:36:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:27:13.880 09:36:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:27:13.880 09:36:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:13.880 09:36:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:13.880 09:36:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:13.880 09:36:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:13.880 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:13.880 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.880 09:36:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.880 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:27:13.880 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:27:13.880 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:13.880 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:13.880 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.880 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:13.880 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.880 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:13.880 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.880 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:27:13.880 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:13.880 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.880 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.880 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.158 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:27:14.158 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:14.158 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:14.158 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:14.158 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.158 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:14.159 
09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.159 [2024-07-15 09:36:01.269585] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:14.159 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:27:14.419 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:14.420 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:14.420 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.420 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:14.420 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.420 09:36:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:14.420 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.420 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:27:14.420 09:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:27:14.988 [2024-07-15 09:36:02.012982] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:14.988 [2024-07-15 09:36:02.013008] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:14.988 [2024-07-15 09:36:02.013023] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:14.988 [2024-07-15 09:36:02.101305] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:15.248 [2024-07-15 09:36:02.285133] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:27:15.248 [2024-07-15 09:36:02.285158] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:15.508 09:36:02 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.508 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:15.509 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.770 [2024-07-15 09:36:02.809718] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:15.770 [2024-07-15 09:36:02.810449] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:15.770 [2024-07-15 09:36:02.810474] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:15.770 [2024-07-15 09:36:02.939300] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:27:15.770 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.029 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:27:16.029 09:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:27:16.029 [2024-07-15 09:36:03.210648] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:16.029 [2024-07-15 09:36:03.210667] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:16.029 [2024-07-15 09:36:03.210673] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:17.024 09:36:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:17.024 09:36:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:17.024 09:36:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:27:17.024 09:36:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:17.024 09:36:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:17.024 09:36:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.024 09:36:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:17.024 09:36:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.024 09:36:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:17.024 09:36:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.024 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:17.024 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:17.024 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:27:17.024 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:17.024 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:17.024 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:17.024 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:17.024 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:17.024 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:17.024 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:27:17.024 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:17.024 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:17.024 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.024 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.024 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.024 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:17.024 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:17.024 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:27:17.024 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:17.024 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:17.024 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.024 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.024 [2024-07-15 09:36:04.086210] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:17.024 [2024-07-15 09:36:04.086232] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:17.024 [2024-07-15 09:36:04.086709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.024 [2024-07-15 09:36:04.086727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.024 [2024-07-15 09:36:04.086736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.025 [2024-07-15 09:36:04.086747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.025 [2024-07-15 09:36:04.086759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.025 [2024-07-15 09:36:04.086767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.025 [2024-07-15 09:36:04.086774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.025 [2024-07-15 09:36:04.086781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.025 [2024-07-15 09:36:04.086789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71c9a0 is same with the state(5) to be set 00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:17.025 09:36:04 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:17.025 [2024-07-15 09:36:04.096720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71c9a0 (9): Bad file descriptor 00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:17.025 [2024-07-15 09:36:04.106762] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:17.025 [2024-07-15 09:36:04.107235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.025 [2024-07-15 09:36:04.107273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x71c9a0 with addr=10.0.0.2, port=4420 00:27:17.025 [2024-07-15 09:36:04.107284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71c9a0 is same with the state(5) to be set 00:27:17.025 [2024-07-15 09:36:04.107303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71c9a0 (9): Bad file descriptor 00:27:17.025 [2024-07-15 09:36:04.107331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:17.025 [2024-07-15 09:36:04.107339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:17.025 [2024-07-15 09:36:04.107347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:17.025 [2024-07-15 09:36:04.107363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.025 [2024-07-15 09:36:04.116820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:17.025 [2024-07-15 09:36:04.117170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.025 [2024-07-15 09:36:04.117207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x71c9a0 with addr=10.0.0.2, port=4420 00:27:17.025 [2024-07-15 09:36:04.117222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71c9a0 is same with the state(5) to be set 00:27:17.025 [2024-07-15 09:36:04.117241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71c9a0 (9): Bad file descriptor 00:27:17.025 [2024-07-15 09:36:04.117253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:17.025 [2024-07-15 09:36:04.117259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:17.025 [2024-07-15 09:36:04.117267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:17.025 [2024-07-15 09:36:04.117282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.025 [2024-07-15 09:36:04.126875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:17.025 [2024-07-15 09:36:04.127119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.025 [2024-07-15 09:36:04.127136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x71c9a0 with addr=10.0.0.2, port=4420 00:27:17.025 [2024-07-15 09:36:04.127144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71c9a0 is same with the state(5) to be set 00:27:17.025 [2024-07-15 09:36:04.127156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71c9a0 (9): Bad file descriptor 00:27:17.025 [2024-07-15 09:36:04.127166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:17.025 [2024-07-15 09:36:04.127172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:17.025 [2024-07-15 09:36:04.127179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:17.025 [2024-07-15 09:36:04.127190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:17.025 [2024-07-15 09:36:04.136938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:17.025 [2024-07-15 09:36:04.137219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.025 [2024-07-15 09:36:04.137233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x71c9a0 with addr=10.0.0.2, port=4420 00:27:17.025 [2024-07-15 09:36:04.137243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71c9a0 is same with the state(5) to be set 00:27:17.025 [2024-07-15 09:36:04.137254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71c9a0 (9): Bad file descriptor 00:27:17.025 [2024-07-15 09:36:04.137264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:17.025 [2024-07-15 09:36:04.137271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:17.025 [2024-07-15 09:36:04.137278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:17.025 [2024-07-15 09:36:04.137288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:17.025 [2024-07-15 09:36:04.146996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:17.025 [2024-07-15 09:36:04.147248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.025 [2024-07-15 09:36:04.147261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x71c9a0 with addr=10.0.0.2, port=4420 00:27:17.025 [2024-07-15 09:36:04.147268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71c9a0 is same with the state(5) to be set 00:27:17.025 [2024-07-15 09:36:04.147281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71c9a0 (9): Bad file descriptor 00:27:17.025 [2024-07-15 09:36:04.147293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:17.025 [2024-07-15 09:36:04.147299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:17.025 [2024-07-15 09:36:04.147306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:17.025 [2024-07-15 09:36:04.147316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:17.025 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:17.025 [2024-07-15 09:36:04.157048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:17.025 [2024-07-15 09:36:04.157414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.025 [2024-07-15 09:36:04.157427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x71c9a0 with addr=10.0.0.2, port=4420 00:27:17.025 [2024-07-15 09:36:04.157435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71c9a0 is same with the state(5) to be set 00:27:17.025 [2024-07-15 09:36:04.157446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71c9a0 (9): Bad file descriptor 00:27:17.026 [2024-07-15 09:36:04.157456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:17.026 [2024-07-15 09:36:04.157462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:17.026 [2024-07-15 09:36:04.157469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:17.026 [2024-07-15 09:36:04.157480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.026 [2024-07-15 09:36:04.167103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:17.026 [2024-07-15 09:36:04.167295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.026 [2024-07-15 09:36:04.167307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x71c9a0 with addr=10.0.0.2, port=4420 00:27:17.026 [2024-07-15 09:36:04.167314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71c9a0 is same with the state(5) to be set 00:27:17.026 [2024-07-15 09:36:04.167324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71c9a0 (9): Bad file descriptor 00:27:17.026 [2024-07-15 09:36:04.167334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:17.026 [2024-07-15 09:36:04.167340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:17.026 [2024-07-15 09:36:04.167347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:17.026 [2024-07-15 09:36:04.167362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
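Note: the burst of "connect() failed, errno = 111" and "Resetting controller failed." entries above is a plausible consequence of test step discovery.sh@127 earlier in this trace, which removes the 4420 listener from nqn.2016-06.io.spdk:cnode0 while the host still holds a path to that port; errno 111 is ECONNREFUSED, so the host-side reset poller keeps failing until the next discovery log page prunes the stale path and only 4421 remains. A minimal sketch of that sequence, reusing only the RPCs, socket path (/tmp/host.sock), and controller name (nvme0) shown in the trace (rpc_cmd is the test-harness helper used throughout this log):

  # target side: drop the first listener (mirrors discovery.sh@127)
  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # host side: poll until only the second port remains on the controller (discovery.sh@131)
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs    # expect: 4421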
00:27:17.026 [2024-07-15 09:36:04.172870] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:17.026 [2024-07-15 09:36:04.172888] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:17.026 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.026 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:17.026 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:17.026 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:17.026 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:17.026 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:17.026 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:17.026 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:17.026 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:27:17.026 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:17.026 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:17.026 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.026 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.026 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:17.026 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:17.026 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.287 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:27:17.287 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:17.287 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:27:17.287 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:17.287 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:17.287 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:27:17.288 
09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.288 09:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:18.675 [2024-07-15 09:36:05.519953] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:18.675 [2024-07-15 09:36:05.519969] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:18.675 [2024-07-15 09:36:05.519981] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:18.675 [2024-07-15 09:36:05.606249] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:27:18.937 [2024-07-15 09:36:05.878740] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:18.937 [2024-07-15 09:36:05.878779] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:18.937 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.937 09:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:18.937 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:27:18.937 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:18.937 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:18.937 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:18.937 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:18.937 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:18.937 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:18.937 09:36:05 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:18.938 request: 00:27:18.938 { 00:27:18.938 "name": "nvme", 00:27:18.938 "trtype": "tcp", 00:27:18.938 "traddr": "10.0.0.2", 00:27:18.938 "adrfam": "ipv4", 00:27:18.938 "trsvcid": "8009", 00:27:18.938 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:18.938 "wait_for_attach": true, 00:27:18.938 "method": "bdev_nvme_start_discovery", 00:27:18.938 "req_id": 1 00:27:18.938 } 00:27:18.938 Got JSON-RPC error response 00:27:18.938 response: 00:27:18.938 { 00:27:18.938 "code": -17, 00:27:18.938 "message": "File exists" 00:27:18.938 } 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:18.938 09:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:18.938 request: 00:27:18.938 { 00:27:18.938 "name": "nvme_second", 00:27:18.938 "trtype": "tcp", 00:27:18.938 "traddr": "10.0.0.2", 00:27:18.938 "adrfam": "ipv4", 00:27:18.938 "trsvcid": "8009", 00:27:18.938 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:18.938 "wait_for_attach": true, 00:27:18.938 "method": "bdev_nvme_start_discovery", 00:27:18.938 "req_id": 1 00:27:18.938 } 00:27:18.938 Got JSON-RPC error response 00:27:18.938 response: 00:27:18.938 { 00:27:18.938 "code": -17, 00:27:18.938 "message": "File exists" 00:27:18.938 } 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- 
# xargs 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.938 09:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.324 [2024-07-15 09:36:07.135485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.324 [2024-07-15 09:36:07.135513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75ad80 with addr=10.0.0.2, port=8010 00:27:20.324 [2024-07-15 09:36:07.135528] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:20.324 [2024-07-15 09:36:07.135536] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:20.324 [2024-07-15 09:36:07.135543] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:21.266 [2024-07-15 09:36:08.137687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-07-15 09:36:08.137709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75ad80 with addr=10.0.0.2, port=8010 00:27:21.266 [2024-07-15 09:36:08.137721] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:21.266 [2024-07-15 09:36:08.137727] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:21.266 [2024-07-15 09:36:08.137734] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:22.283 [2024-07-15 09:36:09.139808] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:27:22.283 request: 00:27:22.283 { 00:27:22.283 "name": "nvme_second", 00:27:22.283 "trtype": "tcp", 00:27:22.283 "traddr": "10.0.0.2", 00:27:22.283 "adrfam": "ipv4", 00:27:22.283 "trsvcid": "8010", 00:27:22.283 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:22.283 "wait_for_attach": false, 00:27:22.283 "attach_timeout_ms": 3000, 00:27:22.283 "method": "bdev_nvme_start_discovery", 00:27:22.283 "req_id": 1 
00:27:22.283 } 00:27:22.283 Got JSON-RPC error response 00:27:22.283 response: 00:27:22.283 { 00:27:22.283 "code": -110, 00:27:22.283 "message": "Connection timed out" 00:27:22.283 } 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 831791 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:22.283 rmmod nvme_tcp 00:27:22.283 rmmod nvme_fabrics 00:27:22.283 rmmod nvme_keyring 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 831604 ']' 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 831604 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 831604 ']' 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 831604 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 831604 
00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 831604' 00:27:22.283 killing process with pid 831604 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 831604 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 831604 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:22.283 09:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:24.831 00:27:24.831 real 0m20.506s 00:27:24.831 user 0m23.236s 00:27:24.831 sys 0m7.427s 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.831 ************************************ 00:27:24.831 END TEST nvmf_host_discovery 00:27:24.831 ************************************ 00:27:24.831 09:36:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:24.831 09:36:11 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:24.831 09:36:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:24.831 09:36:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:24.831 09:36:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:24.831 ************************************ 00:27:24.831 START TEST nvmf_host_multipath_status 00:27:24.831 ************************************ 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:24.831 * Looking for test storage... 
00:27:24.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.831 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:24.832 09:36:11 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:27:24.832 09:36:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:32.982 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:32.982 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:32.982 Found net devices under 0000:31:00.0: cvl_0_0 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:32.982 Found net devices under 0000:31:00.1: cvl_0_1 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:32.982 09:36:19 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:32.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:32.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:27:32.982 00:27:32.982 --- 10.0.0.2 ping statistics --- 00:27:32.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.982 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:32.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:32.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:27:32.982 00:27:32.982 --- 10.0.0.1 ping statistics --- 00:27:32.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.982 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=838862 00:27:32.982 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 838862 00:27:32.983 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:32.983 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 838862 ']' 00:27:32.983 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.983 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:32.983 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:32.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:32.983 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:32.983 09:36:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:32.983 [2024-07-15 09:36:19.807801] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
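Before the target application starts, the nvmf_tcp_init steps traced above wire the two E810 ports into a self-contained test bed on one host: cvl_0_0 is moved into a fresh network namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), NVMe/TCP port 4420 is opened in iptables, and both directions are ping-verified. The sequence below is condensed from those traced commands (namespace, interface names and addresses taken from the log), shown here only as a readable summary of the setup:

#!/usr/bin/env bash
NS=cvl_0_0_ns_spdk      # namespace name from the trace
TGT_IF=cvl_0_0          # target-side port, moved into the namespace
INI_IF=cvl_0_1          # initiator-side port, stays in the root namespace

ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2                          # root namespace -> target interface
ip netns exec "$NS" ping -c 1 10.0.0.1      # target namespace -> initiator interface
# Every later target-side command is then prefixed with the namespace, e.g.
# ip netns exec "$NS" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3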
00:27:32.983 [2024-07-15 09:36:19.807870] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:32.983 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.983 [2024-07-15 09:36:19.886345] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:32.983 [2024-07-15 09:36:19.959987] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:32.983 [2024-07-15 09:36:19.960023] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:32.983 [2024-07-15 09:36:19.960031] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:32.983 [2024-07-15 09:36:19.960037] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:32.983 [2024-07-15 09:36:19.960043] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:32.983 [2024-07-15 09:36:19.960178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.983 [2024-07-15 09:36:19.960179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.554 09:36:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:33.554 09:36:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:27:33.554 09:36:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:33.554 09:36:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:33.554 09:36:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:33.554 09:36:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:33.554 09:36:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=838862 00:27:33.554 09:36:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:33.554 [2024-07-15 09:36:20.736650] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:33.554 09:36:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:33.814 Malloc0 00:27:33.814 09:36:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:34.074 09:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:34.074 09:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:34.335 [2024-07-15 09:36:21.367433] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.335 09:36:21 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:34.335 [2024-07-15 09:36:21.507763] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:34.335 09:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=839226 00:27:34.335 09:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:34.335 09:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:34.336 09:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 839226 /var/tmp/bdevperf.sock 00:27:34.336 09:36:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 839226 ']' 00:27:34.336 09:36:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:34.336 09:36:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:34.336 09:36:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:34.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:34.336 09:36:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:34.336 09:36:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:35.276 09:36:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:35.276 09:36:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:27:35.276 09:36:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:35.536 09:36:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:27:35.797 Nvme0n1 00:27:35.797 09:36:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:36.057 Nvme0n1 00:27:36.057 09:36:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:36.057 09:36:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:38.602 09:36:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:38.602 09:36:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:38.602 09:36:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:38.602 09:36:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:39.544 09:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:39.544 09:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:39.544 09:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:39.544 09:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:39.544 09:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:39.544 09:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:39.545 09:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:39.545 09:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:39.804 09:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:39.804 09:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:39.804 09:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:39.804 09:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:40.064 09:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.064 09:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:40.064 09:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.064 09:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:40.064 09:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.064 09:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:40.064 09:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.064 09:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:40.325 09:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.325 09:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:40.325 09:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.325 09:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:40.325 09:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.325 09:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:40.325 09:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:40.586 09:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:40.846 09:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:41.787 09:36:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:41.787 09:36:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:41.787 09:36:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:41.787 09:36:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:42.048 09:36:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:42.048 09:36:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:42.048 09:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:42.048 09:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:42.048 09:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:42.048 09:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:42.048 09:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:42.048 09:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:42.309 09:36:29 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:42.309 09:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:42.309 09:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:42.309 09:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:42.309 09:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:42.310 09:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:42.310 09:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:42.310 09:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:42.570 09:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:42.570 09:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:42.570 09:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:42.570 09:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:42.831 09:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:42.831 09:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:42.831 09:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:42.831 09:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:43.091 09:36:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:44.031 09:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:44.031 09:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:44.031 09:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:44.031 09:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:44.293 09:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:44.293 09:36:31 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:44.293 09:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:44.293 09:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:44.555 09:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:44.555 09:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:44.555 09:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:44.555 09:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:44.555 09:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:44.555 09:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:44.555 09:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:44.555 09:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:44.816 09:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:44.816 09:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:44.816 09:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:44.816 09:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:44.816 09:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:44.816 09:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:44.816 09:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:44.816 09:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:45.077 09:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:45.078 09:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:45.078 09:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:45.339 09:36:32 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:45.339 09:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:46.726 09:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:46.726 09:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:46.726 09:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:46.726 09:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:46.726 09:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:46.726 09:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:46.726 09:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:46.726 09:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:46.726 09:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:46.726 09:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:46.726 09:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:46.726 09:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:46.987 09:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:46.987 09:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:46.987 09:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:46.987 09:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:47.247 09:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:47.247 09:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:47.247 09:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:47.247 09:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:47.247 09:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:27:47.247 09:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:47.247 09:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:47.247 09:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:47.509 09:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:47.509 09:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:47.509 09:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:47.509 09:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:47.770 09:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:48.713 09:36:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:48.713 09:36:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:48.713 09:36:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.713 09:36:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:48.973 09:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:48.973 09:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:48.973 09:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.973 09:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:49.234 09:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:49.234 09:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:49.234 09:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:49.234 09:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:49.234 09:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:49.234 09:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:27:49.234 09:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:49.234 09:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:49.494 09:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:49.494 09:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:49.494 09:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:49.494 09:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:49.754 09:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:49.755 09:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:49.755 09:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:49.755 09:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:49.755 09:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:49.755 09:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:49.755 09:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:50.015 09:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:50.275 09:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:51.218 09:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:51.218 09:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:51.218 09:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:51.218 09:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:51.218 09:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:51.218 09:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:51.218 09:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:51.218 09:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:51.479 09:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:51.479 09:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:51.479 09:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:51.479 09:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:51.740 09:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:51.740 09:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:51.740 09:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:51.740 09:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:51.740 09:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:51.740 09:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:51.740 09:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:51.740 09:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:52.001 09:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:52.001 09:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:52.002 09:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:52.002 09:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:52.263 09:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:52.263 09:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:52.263 09:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:27:52.263 09:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:27:52.525 09:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:52.786 09:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:53.729 09:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:53.729 09:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:53.729 09:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.729 09:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:53.729 09:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.729 09:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:53.990 09:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.990 09:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:53.990 09:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.990 09:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:53.990 09:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.990 09:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:54.250 09:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:54.250 09:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:54.250 09:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:54.250 09:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:54.250 09:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:54.250 09:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:54.250 09:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:54.250 09:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:54.512 09:36:41 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:54.512 09:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:54.512 09:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:54.512 09:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:54.773 09:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:54.773 09:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:54.773 09:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:54.773 09:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:55.033 09:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:55.977 09:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:55.977 09:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:55.977 09:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.977 09:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:56.237 09:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:56.237 09:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:56.237 09:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.237 09:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:56.497 09:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:56.497 09:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:56.497 09:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.498 09:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:56.498 09:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:56.498 09:36:43 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:56.498 09:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.498 09:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:56.758 09:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:56.758 09:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:56.758 09:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.758 09:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:56.758 09:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:56.758 09:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:57.018 09:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:57.018 09:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:57.018 09:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:57.018 09:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:57.018 09:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:57.278 09:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:57.278 09:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:27:58.662 09:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:58.662 09:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:58.662 09:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.662 09:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:58.662 09:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:58.662 09:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:58.662 09:36:45 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.662 09:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:58.662 09:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:58.662 09:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:58.662 09:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.662 09:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:58.921 09:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:58.921 09:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:58.921 09:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:58.922 09:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.181 09:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.181 09:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:59.181 09:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.181 09:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:59.181 09:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.181 09:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:59.181 09:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.181 09:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:59.442 09:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.442 09:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:59.442 09:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:59.701 09:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:59.701 09:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:28:00.641 09:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:28:00.641 09:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:00.641 09:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:00.641 09:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.903 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:00.903 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:00.903 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.903 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:01.164 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:01.164 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:01.164 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.164 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:01.164 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.164 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:01.165 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.165 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:01.426 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.426 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:01.426 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.426 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:01.688 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.688 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:01.688 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.688 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:01.688 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:01.688 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 839226 00:28:01.688 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 839226 ']' 00:28:01.688 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 839226 00:28:01.688 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:28:01.688 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:01.688 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 839226 00:28:01.952 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:28:01.952 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:28:01.952 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 839226' 00:28:01.952 killing process with pid 839226 00:28:01.952 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 839226 00:28:01.952 09:36:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 839226 00:28:01.952 Connection closed with partial response: 00:28:01.952 00:28:01.952 00:28:01.952 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 839226 00:28:01.952 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:01.952 [2024-07-15 09:36:21.578376] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:28:01.952 [2024-07-15 09:36:21.578448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid839226 ] 00:28:01.952 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.952 [2024-07-15 09:36:21.635515] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.952 [2024-07-15 09:36:21.687567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:01.952 Running I/O for 90 seconds... 
00:28:01.952 [2024-07-15 09:36:34.680319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:46080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.952 [2024-07-15 09:36:34.680353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:01.952 [2024-07-15 09:36:34.680386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.952 [2024-07-15 09:36:34.680393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:01.952 [2024-07-15 09:36:34.680404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.952 [2024-07-15 09:36:34.680409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:01.952 [2024-07-15 09:36:34.680420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.952 [2024-07-15 09:36:34.680425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:01.952 [2024-07-15 09:36:34.680435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.952 [2024-07-15 09:36:34.680440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:01.952 [2024-07-15 09:36:34.680450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:46112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.952 [2024-07-15 09:36:34.680455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:01.952 [2024-07-15 09:36:34.680465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.952 [2024-07-15 09:36:34.680470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:01.952 [2024-07-15 09:36:34.680481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.952 [2024-07-15 09:36:34.680486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:01.952 [2024-07-15 09:36:34.680496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:46136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.952 [2024-07-15 09:36:34.680501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:01.952 [2024-07-15 09:36:34.680511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:46144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.952 [2024-07-15 09:36:34.680516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:103 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:01.952 [2024-07-15 09:36:34.680527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:46152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.952 [2024-07-15 09:36:34.680537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:01.952 [2024-07-15 09:36:34.680548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.952 [2024-07-15 09:36:34.680553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:01.952 [2024-07-15 09:36:34.680563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:46168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.952 [2024-07-15 09:36:34.680568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:01.952 [2024-07-15 09:36:34.680578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:46176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.952 [2024-07-15 09:36:34.680584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:01.952 [2024-07-15 09:36:34.680595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.952 [2024-07-15 09:36:34.680600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.952 [2024-07-15 09:36:34.680610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.952 [2024-07-15 09:36:34.680615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:01.952 [2024-07-15 09:36:34.680626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:46200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.952 [2024-07-15 09:36:34.680632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:01.952 [2024-07-15 09:36:34.680644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:46208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.952 [2024-07-15 09:36:34.680651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:01.952 [2024-07-15 09:36:34.680661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:46216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.952 [2024-07-15 09:36:34.680667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:01.952 [2024-07-15 09:36:34.680677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.952 [2024-07-15 09:36:34.680683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:01.952 [2024-07-15 09:36:34.680693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:46232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.952 [2024-07-15 09:36:34.680698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:01.952 [2024-07-15 09:36:34.680708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:46240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.952 [2024-07-15 09:36:34.680716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:01.952 [2024-07-15 09:36:34.680727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:46248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.952 [2024-07-15 09:36:34.680733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:01.952 [2024-07-15 09:36:34.680745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.680755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.680766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:46264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.680771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:46288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:46296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:46304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:01.953 [2024-07-15 09:36:34.681125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:46320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:46336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:46368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:46376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 
lba:46384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:46392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:46400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:46408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:46416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:46424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:46432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:46440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:46456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681466] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:46464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:46480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:46488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:46496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:46512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:46520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:46536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:01.953 
[2024-07-15 09:36:34.681693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:46544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:46560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:46576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:01.953 [2024-07-15 09:36:34.681794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:46584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.953 [2024-07-15 09:36:34.681800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.681814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.681819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.681833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.681838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.681851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:46608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.681857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.681871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.681876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.681889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:46624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.681894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.681908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.681914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.681927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:46640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.681932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.681947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:46648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.681953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.681997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:46656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:46664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:46672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:46680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:46688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682104] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:46712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:46728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:46744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:45960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.954 [2024-07-15 09:36:34.682365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:45968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.954 [2024-07-15 09:36:34.682386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:45976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.954 [2024-07-15 09:36:34.682406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:45984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.954 [2024-07-15 
09:36:34.682427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:45992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.954 [2024-07-15 09:36:34.682447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.954 [2024-07-15 09:36:34.682467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:46008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.954 [2024-07-15 09:36:34.682488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:46760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:46768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:46776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:46800 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:46808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:46832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:46840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:46848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:46856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:01.954 [2024-07-15 09:36:34.682960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:46864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.954 [2024-07-15 09:36:34.682966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:34.682982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:46872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.955 [2024-07-15 09:36:34.682986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:34.683003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:115 nsid:1 lba:46880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.955 [2024-07-15 09:36:34.683010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:34.683027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.955 [2024-07-15 09:36:34.683032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:34.683048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:46896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.955 [2024-07-15 09:36:34.683054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:34.683071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.955 [2024-07-15 09:36:34.683076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:34.683370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.955 [2024-07-15 09:36:34.683377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:34.683394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.955 [2024-07-15 09:36:34.683399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:34.683416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:46928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.955 [2024-07-15 09:36:34.683421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:34.683438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:46936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.955 [2024-07-15 09:36:34.683443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:34.683460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:46944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.955 [2024-07-15 09:36:34.683465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:34.683482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.955 [2024-07-15 09:36:34.683488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:34.683504] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.955 [2024-07-15 09:36:34.683510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:34.683527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.955 [2024-07-15 09:36:34.683533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:34.683550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:46024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.955 [2024-07-15 09:36:34.683555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:34.683573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:46032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.955 [2024-07-15 09:36:34.683579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:34.683596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:46040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.955 [2024-07-15 09:36:34.683602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:34.683619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.955 [2024-07-15 09:36:34.683624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:34.683641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:46056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.955 [2024-07-15 09:36:34.683647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:34.683663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:46064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.955 [2024-07-15 09:36:34.683668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:34.683685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:46072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.955 [2024-07-15 09:36:34.683691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:34.683708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:46968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.955 [2024-07-15 09:36:34.683713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 
sqhd:0013 p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:46.804168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.955 [2024-07-15 09:36:46.804203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:46.804239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.955 [2024-07-15 09:36:46.804246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:46.804304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.955 [2024-07-15 09:36:46.804311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:46.804323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.955 [2024-07-15 09:36:46.804329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:46.805289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.955 [2024-07-15 09:36:46.805304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:46.805321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.955 [2024-07-15 09:36:46.805327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:46.805337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.955 [2024-07-15 09:36:46.805344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:46.805354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.955 [2024-07-15 09:36:46.805359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:01.955 [2024-07-15 09:36:46.805370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.955 [2024-07-15 09:36:46.805375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:01.955 Received shutdown signal, test time was about 25.607587 seconds 00:28:01.955 00:28:01.955 Latency(us) 00:28:01.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.955 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 
4096) 00:28:01.955 Verification LBA range: start 0x0 length 0x4000 00:28:01.955 Nvme0n1 : 25.61 10897.30 42.57 0.00 0.00 11726.76 370.35 3019898.88 00:28:01.955 =================================================================================================================== 00:28:01.955 Total : 10897.30 42.57 0.00 0.00 11726.76 370.35 3019898.88 00:28:01.955 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:02.220 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:28:02.220 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:02.220 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:28:02.220 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:02.220 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:28:02.220 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:02.220 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:28:02.220 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:02.220 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:02.220 rmmod nvme_tcp 00:28:02.220 rmmod nvme_fabrics 00:28:02.220 rmmod nvme_keyring 00:28:02.220 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:02.220 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:28:02.220 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:28:02.220 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 838862 ']' 00:28:02.220 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 838862 00:28:02.220 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 838862 ']' 00:28:02.220 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 838862 00:28:02.220 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:28:02.220 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:02.220 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 838862 00:28:02.220 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:02.220 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:02.220 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 838862' 00:28:02.220 killing process with pid 838862 00:28:02.220 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 838862 00:28:02.220 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 838862 00:28:02.494 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:02.494 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ 
tcp == \t\c\p ]] 00:28:02.494 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:02.494 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:02.494 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:02.494 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.494 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:02.494 09:36:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.477 09:36:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:04.477 00:28:04.477 real 0m39.971s 00:28:04.477 user 1m41.147s 00:28:04.477 sys 0m11.304s 00:28:04.477 09:36:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:04.477 09:36:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:04.477 ************************************ 00:28:04.477 END TEST nvmf_host_multipath_status 00:28:04.477 ************************************ 00:28:04.477 09:36:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:04.477 09:36:51 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:04.477 09:36:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:04.477 09:36:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:04.477 09:36:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:04.477 ************************************ 00:28:04.477 START TEST nvmf_discovery_remove_ifc 00:28:04.477 ************************************ 00:28:04.477 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:04.763 * Looking for test storage... 
00:28:04.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:04.763 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:04.764 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:04.764 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:04.764 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:28:04.764 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:04.764 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:04.764 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:04.764 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:04.764 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:04.764 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:04.764 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.764 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:04.764 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.764 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:04.764 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:04.764 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:28:04.764 09:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:12.902 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:12.902 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:12.902 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:12.903 09:36:59 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:12.903 Found net devices under 0000:31:00.0: cvl_0_0 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:12.903 Found net devices under 0000:31:00.1: cvl_0_1 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:12.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:12.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.552 ms 00:28:12.903 00:28:12.903 --- 10.0.0.2 ping statistics --- 00:28:12.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.903 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:12.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:12.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:28:12.903 00:28:12.903 --- 10.0.0.1 ping statistics --- 00:28:12.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.903 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=849446 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 849446 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 849446 ']' 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:12.903 09:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:12.903 [2024-07-15 09:37:00.034652] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:28:12.903 [2024-07-15 09:37:00.034715] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.903 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.165 [2024-07-15 09:37:00.128391] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.165 [2024-07-15 09:37:00.220758] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:13.165 [2024-07-15 09:37:00.220815] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:13.165 [2024-07-15 09:37:00.220823] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:13.165 [2024-07-15 09:37:00.220830] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:13.165 [2024-07-15 09:37:00.220836] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:13.165 [2024-07-15 09:37:00.220868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.738 09:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:13.738 09:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:28:13.738 09:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:13.738 09:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:13.738 09:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:13.738 09:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:13.738 09:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:13.738 09:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.738 09:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:13.738 [2024-07-15 09:37:00.868826] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:13.738 [2024-07-15 09:37:00.877040] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:13.738 null0 00:28:13.738 [2024-07-15 09:37:00.908992] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:13.738 09:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.738 09:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=849510 00:28:13.738 09:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 849510 /tmp/host.sock 00:28:13.738 09:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:13.738 09:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 849510 ']' 00:28:13.738 09:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:28:13.738 09:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:28:13.738 09:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:13.738 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:13.738 09:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:13.738 09:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:14.000 [2024-07-15 09:37:00.985272] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:28:14.000 [2024-07-15 09:37:00.985335] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid849510 ] 00:28:14.000 EAL: No free 2048 kB hugepages reported on node 1 00:28:14.000 [2024-07-15 09:37:01.056952] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.000 [2024-07-15 09:37:01.132230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.572 09:37:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:14.572 09:37:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:28:14.572 09:37:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:14.572 09:37:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:14.572 09:37:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.572 09:37:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:14.572 09:37:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.572 09:37:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:14.572 09:37:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.572 09:37:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:14.833 09:37:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.833 09:37:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:14.833 09:37:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.833 09:37:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:15.775 [2024-07-15 09:37:02.878963] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:15.775 [2024-07-15 09:37:02.878983] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:15.775 [2024-07-15 09:37:02.878996] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:15.775 [2024-07-15 09:37:02.966280] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:16.037 [2024-07-15 09:37:03.028592] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:16.037 [2024-07-15 09:37:03.028642] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:16.037 [2024-07-15 09:37:03.028664] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:16.037 [2024-07-15 09:37:03.028679] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:16.037 [2024-07-15 09:37:03.028699] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:16.037 09:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.037 09:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:16.037 09:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:16.037 09:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:16.037 09:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:16.037 09:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.037 09:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:16.037 09:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:16.037 09:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:16.037 [2024-07-15 09:37:03.036525] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xa73500 was disconnected and freed. delete nvme_qpair. 
00:28:16.037 09:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.037 09:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:16.037 09:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:16.037 09:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:16.037 09:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:16.037 09:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:16.037 09:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:16.037 09:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:16.037 09:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.037 09:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:16.037 09:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:16.037 09:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:16.298 09:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.298 09:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:16.298 09:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:17.242 09:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:17.242 09:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:17.242 09:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.242 09:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:17.242 09:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:17.242 09:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:17.242 09:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:17.242 09:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.242 09:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:17.242 09:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:18.185 09:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:18.185 09:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:18.185 09:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:18.185 09:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.185 09:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:18.185 09:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:28:18.185 09:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:18.185 09:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.447 09:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:18.447 09:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:19.391 09:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:19.391 09:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:19.391 09:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:19.391 09:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:19.391 09:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.391 09:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:19.391 09:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:19.391 09:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.391 09:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:19.391 09:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:20.332 09:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:20.332 09:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:20.332 09:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:20.332 09:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.332 09:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:20.332 09:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:20.332 09:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:20.332 09:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.332 09:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:20.332 09:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:21.273 [2024-07-15 09:37:08.469161] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:21.273 [2024-07-15 09:37:08.469211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:21.273 [2024-07-15 09:37:08.469223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.273 [2024-07-15 09:37:08.469234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:21.273 [2024-07-15 09:37:08.469241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:21.273 [2024-07-15 09:37:08.469249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:21.273 [2024-07-15 09:37:08.469257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.273 [2024-07-15 09:37:08.469264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:21.273 [2024-07-15 09:37:08.469272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.273 [2024-07-15 09:37:08.469280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:21.273 [2024-07-15 09:37:08.469287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.273 [2024-07-15 09:37:08.469294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3a0a0 is same with the state(5) to be set 00:28:21.533 [2024-07-15 09:37:08.479180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3a0a0 (9): Bad file descriptor 00:28:21.533 [2024-07-15 09:37:08.489220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:21.533 09:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:21.533 09:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:21.533 09:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:21.533 09:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.533 09:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:21.533 09:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:21.533 09:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:22.473 [2024-07-15 09:37:09.528777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:22.473 [2024-07-15 09:37:09.528821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3a0a0 with addr=10.0.0.2, port=4420 00:28:22.473 [2024-07-15 09:37:09.528835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3a0a0 is same with the state(5) to be set 00:28:22.473 [2024-07-15 09:37:09.528860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3a0a0 (9): Bad file descriptor 00:28:22.473 [2024-07-15 09:37:09.529227] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:22.473 [2024-07-15 09:37:09.529247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:22.473 [2024-07-15 09:37:09.529254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:22.473 [2024-07-15 09:37:09.529264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:28:22.473 [2024-07-15 09:37:09.529280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.473 [2024-07-15 09:37:09.529289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:22.473 09:37:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.473 09:37:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:22.473 09:37:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:23.415 [2024-07-15 09:37:10.531674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:23.415 [2024-07-15 09:37:10.531710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:23.415 [2024-07-15 09:37:10.531719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:23.415 [2024-07-15 09:37:10.531727] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:28:23.415 [2024-07-15 09:37:10.531743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:23.415 [2024-07-15 09:37:10.531767] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:23.415 [2024-07-15 09:37:10.531794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.415 [2024-07-15 09:37:10.531805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.415 [2024-07-15 09:37:10.531816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.415 [2024-07-15 09:37:10.531823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.415 [2024-07-15 09:37:10.531832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.415 [2024-07-15 09:37:10.531839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.415 [2024-07-15 09:37:10.531847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.415 [2024-07-15 09:37:10.531854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.415 [2024-07-15 09:37:10.531862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.415 [2024-07-15 09:37:10.531869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.415 [2024-07-15 09:37:10.531877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:28:23.415 [2024-07-15 09:37:10.532515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa39520 (9): Bad file descriptor 00:28:23.415 [2024-07-15 09:37:10.533528] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:23.415 [2024-07-15 09:37:10.533540] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:28:23.415 09:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:23.415 09:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:23.415 09:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:23.415 09:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.415 09:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:23.415 09:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:23.415 09:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:23.415 09:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.415 09:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:23.415 09:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:23.675 09:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:23.675 09:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:23.675 09:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:23.675 09:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:23.675 09:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:23.675 09:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:23.675 09:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.675 09:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:23.675 09:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:23.675 09:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.675 09:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:23.675 09:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:24.616 09:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:24.616 09:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:24.616 09:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:24.616 09:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.616 09:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:28:24.616 09:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:24.616 09:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:24.616 09:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.875 09:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:24.875 09:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:25.444 [2024-07-15 09:37:12.552215] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:25.444 [2024-07-15 09:37:12.552232] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:25.444 [2024-07-15 09:37:12.552244] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:25.444 [2024-07-15 09:37:12.639534] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:25.704 [2024-07-15 09:37:12.824817] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:25.704 [2024-07-15 09:37:12.824858] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:25.704 [2024-07-15 09:37:12.824880] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:25.704 [2024-07-15 09:37:12.824894] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:25.704 [2024-07-15 09:37:12.824902] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:25.704 [2024-07-15 09:37:12.830672] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xa7ccb0 was disconnected and freed. delete nvme_qpair. 
00:28:25.705 09:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:25.705 09:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:25.705 09:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.705 09:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:25.705 09:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:25.705 09:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:25.705 09:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:25.705 09:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.705 09:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:25.705 09:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:25.705 09:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 849510 00:28:25.705 09:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 849510 ']' 00:28:25.705 09:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 849510 00:28:25.705 09:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:28:25.705 09:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:25.705 09:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 849510 00:28:25.967 09:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:25.967 09:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:25.967 09:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 849510' 00:28:25.967 killing process with pid 849510 00:28:25.967 09:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 849510 00:28:25.967 09:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 849510 00:28:25.967 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:25.967 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:25.967 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:28:25.967 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:25.967 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:28:25.967 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:25.967 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:25.967 rmmod nvme_tcp 00:28:25.967 rmmod nvme_fabrics 00:28:25.967 rmmod nvme_keyring 00:28:25.967 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:25.967 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:28:25.967 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:28:25.967 
09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 849446 ']' 00:28:25.967 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 849446 00:28:25.967 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 849446 ']' 00:28:25.967 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 849446 00:28:25.967 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:28:25.967 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:25.967 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 849446 00:28:26.301 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:26.301 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:26.301 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 849446' 00:28:26.301 killing process with pid 849446 00:28:26.301 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 849446 00:28:26.301 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 849446 00:28:26.301 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:26.301 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:26.301 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:26.301 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:26.301 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:26.301 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.301 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:26.301 09:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:28.211 09:37:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:28.211 00:28:28.211 real 0m23.748s 00:28:28.211 user 0m27.192s 00:28:28.211 sys 0m7.348s 00:28:28.211 09:37:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:28.211 09:37:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:28.211 ************************************ 00:28:28.211 END TEST nvmf_discovery_remove_ifc 00:28:28.211 ************************************ 00:28:28.471 09:37:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:28.471 09:37:15 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:28.471 09:37:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:28.471 09:37:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:28.471 09:37:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:28.471 ************************************ 00:28:28.471 START TEST nvmf_identify_kernel_target 00:28:28.471 ************************************ 00:28:28.471 09:37:15 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:28.471 * Looking for test storage... 00:28:28.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.471 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:28.472 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.472 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:28:28.472 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:28.472 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:28.472 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:28.472 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:28.472 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:28.472 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:28.472 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:28.472 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:28.472 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:28:28.472 09:37:15 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:28.472 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:28.472 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:28.472 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:28.472 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:28.472 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:28.472 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:28.472 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:28.472 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:28.472 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:28.472 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:28:28.472 09:37:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:36.610 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:36.610 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:36.610 Found net devices under 0000:31:00.0: cvl_0_0 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:36.610 Found net devices under 0000:31:00.1: cvl_0_1 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:36.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:36.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:28:36.610 00:28:36.610 --- 10.0.0.2 ping statistics --- 00:28:36.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.610 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:36.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:36.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:28:36.610 00:28:36.610 --- 10.0.0.1 ping statistics --- 00:28:36.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.610 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:36.610 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:36.611 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.611 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.611 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:36.611 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.611 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:36.611 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:36.611 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:36.611 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:36.611 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:36.611 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:36.611 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:36.611 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:36.611 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:36.611 09:37:23 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:36.611 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:28:36.611 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:28:36.611 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:36.611 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:36.611 09:37:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:40.817 Waiting for block devices as requested 00:28:40.817 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:40.817 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:40.817 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:40.817 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:40.817 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:40.817 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:40.817 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:41.077 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:41.077 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:41.077 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:41.337 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:41.337 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:41.337 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:41.598 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:41.598 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:41.598 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:41.598 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:41.598 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:41.598 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:41.598 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:41.598 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:41.598 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:41.598 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:41.598 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:41.598 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:41.598 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:41.860 No valid GPT data, bailing 00:28:41.861 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:41.861 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:28:41.861 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:28:41.861 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:41.861 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:41.861 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:41.861 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:41.861 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:41.861 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:41.861 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:28:41.861 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:41.861 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:28:41.861 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:41.861 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:28:41.861 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:28:41.861 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:28:41.861 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:41.861 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:28:41.861 00:28:41.861 Discovery Log Number of Records 2, Generation counter 2 00:28:41.861 =====Discovery Log Entry 0====== 00:28:41.861 trtype: tcp 00:28:41.861 adrfam: ipv4 00:28:41.861 subtype: current discovery subsystem 00:28:41.861 treq: not specified, sq flow control disable supported 00:28:41.861 portid: 1 00:28:41.861 trsvcid: 4420 00:28:41.861 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:41.861 traddr: 10.0.0.1 00:28:41.861 eflags: none 00:28:41.861 sectype: none 00:28:41.861 =====Discovery Log Entry 1====== 00:28:41.861 trtype: tcp 00:28:41.861 adrfam: ipv4 00:28:41.861 subtype: nvme subsystem 00:28:41.861 treq: not specified, sq flow control disable supported 00:28:41.861 portid: 1 00:28:41.861 trsvcid: 4420 00:28:41.861 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:41.861 traddr: 10.0.0.1 00:28:41.861 eflags: none 00:28:41.861 sectype: none 00:28:41.861 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:41.861 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:41.861 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.861 ===================================================== 00:28:41.861 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:41.861 ===================================================== 00:28:41.861 Controller Capabilities/Features 00:28:41.861 ================================ 00:28:41.861 Vendor ID: 0000 00:28:41.861 Subsystem Vendor ID: 0000 00:28:41.861 Serial Number: 185af1a0fb31708198f2 00:28:41.861 Model Number: Linux 00:28:41.861 Firmware Version: 6.7.0-68 00:28:41.861 Recommended Arb Burst: 0 00:28:41.861 IEEE OUI Identifier: 00 00 00 00:28:41.861 Multi-path I/O 00:28:41.861 May have multiple subsystem ports: No 00:28:41.861 May have multiple 
controllers: No 00:28:41.861 Associated with SR-IOV VF: No 00:28:41.861 Max Data Transfer Size: Unlimited 00:28:41.861 Max Number of Namespaces: 0 00:28:41.861 Max Number of I/O Queues: 1024 00:28:41.861 NVMe Specification Version (VS): 1.3 00:28:41.861 NVMe Specification Version (Identify): 1.3 00:28:41.861 Maximum Queue Entries: 1024 00:28:41.861 Contiguous Queues Required: No 00:28:41.861 Arbitration Mechanisms Supported 00:28:41.861 Weighted Round Robin: Not Supported 00:28:41.861 Vendor Specific: Not Supported 00:28:41.861 Reset Timeout: 7500 ms 00:28:41.861 Doorbell Stride: 4 bytes 00:28:41.861 NVM Subsystem Reset: Not Supported 00:28:41.861 Command Sets Supported 00:28:41.861 NVM Command Set: Supported 00:28:41.861 Boot Partition: Not Supported 00:28:41.861 Memory Page Size Minimum: 4096 bytes 00:28:41.861 Memory Page Size Maximum: 4096 bytes 00:28:41.861 Persistent Memory Region: Not Supported 00:28:41.861 Optional Asynchronous Events Supported 00:28:41.861 Namespace Attribute Notices: Not Supported 00:28:41.861 Firmware Activation Notices: Not Supported 00:28:41.861 ANA Change Notices: Not Supported 00:28:41.861 PLE Aggregate Log Change Notices: Not Supported 00:28:41.861 LBA Status Info Alert Notices: Not Supported 00:28:41.861 EGE Aggregate Log Change Notices: Not Supported 00:28:41.861 Normal NVM Subsystem Shutdown event: Not Supported 00:28:41.861 Zone Descriptor Change Notices: Not Supported 00:28:41.861 Discovery Log Change Notices: Supported 00:28:41.861 Controller Attributes 00:28:41.861 128-bit Host Identifier: Not Supported 00:28:41.861 Non-Operational Permissive Mode: Not Supported 00:28:41.861 NVM Sets: Not Supported 00:28:41.861 Read Recovery Levels: Not Supported 00:28:41.861 Endurance Groups: Not Supported 00:28:41.861 Predictable Latency Mode: Not Supported 00:28:41.861 Traffic Based Keep ALive: Not Supported 00:28:41.861 Namespace Granularity: Not Supported 00:28:41.861 SQ Associations: Not Supported 00:28:41.861 UUID List: Not Supported 00:28:41.861 Multi-Domain Subsystem: Not Supported 00:28:41.861 Fixed Capacity Management: Not Supported 00:28:41.861 Variable Capacity Management: Not Supported 00:28:41.861 Delete Endurance Group: Not Supported 00:28:41.861 Delete NVM Set: Not Supported 00:28:41.861 Extended LBA Formats Supported: Not Supported 00:28:41.861 Flexible Data Placement Supported: Not Supported 00:28:41.861 00:28:41.861 Controller Memory Buffer Support 00:28:41.861 ================================ 00:28:41.861 Supported: No 00:28:41.861 00:28:41.861 Persistent Memory Region Support 00:28:41.861 ================================ 00:28:41.861 Supported: No 00:28:41.861 00:28:41.861 Admin Command Set Attributes 00:28:41.861 ============================ 00:28:41.861 Security Send/Receive: Not Supported 00:28:41.861 Format NVM: Not Supported 00:28:41.861 Firmware Activate/Download: Not Supported 00:28:41.861 Namespace Management: Not Supported 00:28:41.861 Device Self-Test: Not Supported 00:28:41.861 Directives: Not Supported 00:28:41.861 NVMe-MI: Not Supported 00:28:41.861 Virtualization Management: Not Supported 00:28:41.861 Doorbell Buffer Config: Not Supported 00:28:41.861 Get LBA Status Capability: Not Supported 00:28:41.861 Command & Feature Lockdown Capability: Not Supported 00:28:41.861 Abort Command Limit: 1 00:28:41.861 Async Event Request Limit: 1 00:28:41.861 Number of Firmware Slots: N/A 00:28:41.861 Firmware Slot 1 Read-Only: N/A 00:28:41.861 Firmware Activation Without Reset: N/A 00:28:41.861 Multiple Update Detection Support: N/A 
00:28:41.861 Firmware Update Granularity: No Information Provided 00:28:41.861 Per-Namespace SMART Log: No 00:28:41.861 Asymmetric Namespace Access Log Page: Not Supported 00:28:41.861 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:41.861 Command Effects Log Page: Not Supported 00:28:41.861 Get Log Page Extended Data: Supported 00:28:41.861 Telemetry Log Pages: Not Supported 00:28:41.861 Persistent Event Log Pages: Not Supported 00:28:41.861 Supported Log Pages Log Page: May Support 00:28:41.861 Commands Supported & Effects Log Page: Not Supported 00:28:41.861 Feature Identifiers & Effects Log Page:May Support 00:28:41.861 NVMe-MI Commands & Effects Log Page: May Support 00:28:41.861 Data Area 4 for Telemetry Log: Not Supported 00:28:41.861 Error Log Page Entries Supported: 1 00:28:41.861 Keep Alive: Not Supported 00:28:41.861 00:28:41.861 NVM Command Set Attributes 00:28:41.861 ========================== 00:28:41.861 Submission Queue Entry Size 00:28:41.861 Max: 1 00:28:41.861 Min: 1 00:28:41.861 Completion Queue Entry Size 00:28:41.861 Max: 1 00:28:41.861 Min: 1 00:28:41.861 Number of Namespaces: 0 00:28:41.861 Compare Command: Not Supported 00:28:41.861 Write Uncorrectable Command: Not Supported 00:28:41.861 Dataset Management Command: Not Supported 00:28:41.861 Write Zeroes Command: Not Supported 00:28:41.861 Set Features Save Field: Not Supported 00:28:41.861 Reservations: Not Supported 00:28:41.861 Timestamp: Not Supported 00:28:41.861 Copy: Not Supported 00:28:41.861 Volatile Write Cache: Not Present 00:28:41.861 Atomic Write Unit (Normal): 1 00:28:41.861 Atomic Write Unit (PFail): 1 00:28:41.861 Atomic Compare & Write Unit: 1 00:28:41.861 Fused Compare & Write: Not Supported 00:28:41.861 Scatter-Gather List 00:28:41.861 SGL Command Set: Supported 00:28:41.861 SGL Keyed: Not Supported 00:28:41.861 SGL Bit Bucket Descriptor: Not Supported 00:28:41.861 SGL Metadata Pointer: Not Supported 00:28:41.861 Oversized SGL: Not Supported 00:28:41.861 SGL Metadata Address: Not Supported 00:28:41.861 SGL Offset: Supported 00:28:41.861 Transport SGL Data Block: Not Supported 00:28:41.861 Replay Protected Memory Block: Not Supported 00:28:41.861 00:28:41.861 Firmware Slot Information 00:28:41.861 ========================= 00:28:41.861 Active slot: 0 00:28:41.861 00:28:41.861 00:28:41.861 Error Log 00:28:41.861 ========= 00:28:41.861 00:28:41.861 Active Namespaces 00:28:41.861 ================= 00:28:41.861 Discovery Log Page 00:28:41.861 ================== 00:28:41.862 Generation Counter: 2 00:28:41.862 Number of Records: 2 00:28:41.862 Record Format: 0 00:28:41.862 00:28:41.862 Discovery Log Entry 0 00:28:41.862 ---------------------- 00:28:41.862 Transport Type: 3 (TCP) 00:28:41.862 Address Family: 1 (IPv4) 00:28:41.862 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:41.862 Entry Flags: 00:28:41.862 Duplicate Returned Information: 0 00:28:41.862 Explicit Persistent Connection Support for Discovery: 0 00:28:41.862 Transport Requirements: 00:28:41.862 Secure Channel: Not Specified 00:28:41.862 Port ID: 1 (0x0001) 00:28:41.862 Controller ID: 65535 (0xffff) 00:28:41.862 Admin Max SQ Size: 32 00:28:41.862 Transport Service Identifier: 4420 00:28:41.862 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:41.862 Transport Address: 10.0.0.1 00:28:41.862 Discovery Log Entry 1 00:28:41.862 ---------------------- 00:28:41.862 Transport Type: 3 (TCP) 00:28:41.862 Address Family: 1 (IPv4) 00:28:41.862 Subsystem Type: 2 (NVM Subsystem) 00:28:41.862 Entry Flags: 
00:28:41.862 Duplicate Returned Information: 0 00:28:41.862 Explicit Persistent Connection Support for Discovery: 0 00:28:41.862 Transport Requirements: 00:28:41.862 Secure Channel: Not Specified 00:28:41.862 Port ID: 1 (0x0001) 00:28:41.862 Controller ID: 65535 (0xffff) 00:28:41.862 Admin Max SQ Size: 32 00:28:41.862 Transport Service Identifier: 4420 00:28:41.862 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:41.862 Transport Address: 10.0.0.1 00:28:41.862 09:37:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:41.862 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.862 get_feature(0x01) failed 00:28:41.862 get_feature(0x02) failed 00:28:41.862 get_feature(0x04) failed 00:28:41.862 ===================================================== 00:28:41.862 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:41.862 ===================================================== 00:28:41.862 Controller Capabilities/Features 00:28:41.862 ================================ 00:28:41.862 Vendor ID: 0000 00:28:41.862 Subsystem Vendor ID: 0000 00:28:41.862 Serial Number: 9ca4f7501428461f5596 00:28:41.862 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:41.862 Firmware Version: 6.7.0-68 00:28:41.862 Recommended Arb Burst: 6 00:28:41.862 IEEE OUI Identifier: 00 00 00 00:28:41.862 Multi-path I/O 00:28:41.862 May have multiple subsystem ports: Yes 00:28:41.862 May have multiple controllers: Yes 00:28:41.862 Associated with SR-IOV VF: No 00:28:41.862 Max Data Transfer Size: Unlimited 00:28:41.862 Max Number of Namespaces: 1024 00:28:41.862 Max Number of I/O Queues: 128 00:28:41.862 NVMe Specification Version (VS): 1.3 00:28:41.862 NVMe Specification Version (Identify): 1.3 00:28:41.862 Maximum Queue Entries: 1024 00:28:41.862 Contiguous Queues Required: No 00:28:41.862 Arbitration Mechanisms Supported 00:28:41.862 Weighted Round Robin: Not Supported 00:28:41.862 Vendor Specific: Not Supported 00:28:41.862 Reset Timeout: 7500 ms 00:28:41.862 Doorbell Stride: 4 bytes 00:28:41.862 NVM Subsystem Reset: Not Supported 00:28:41.862 Command Sets Supported 00:28:41.862 NVM Command Set: Supported 00:28:41.862 Boot Partition: Not Supported 00:28:41.862 Memory Page Size Minimum: 4096 bytes 00:28:41.862 Memory Page Size Maximum: 4096 bytes 00:28:41.862 Persistent Memory Region: Not Supported 00:28:41.862 Optional Asynchronous Events Supported 00:28:41.862 Namespace Attribute Notices: Supported 00:28:41.862 Firmware Activation Notices: Not Supported 00:28:41.862 ANA Change Notices: Supported 00:28:41.862 PLE Aggregate Log Change Notices: Not Supported 00:28:41.862 LBA Status Info Alert Notices: Not Supported 00:28:41.862 EGE Aggregate Log Change Notices: Not Supported 00:28:41.862 Normal NVM Subsystem Shutdown event: Not Supported 00:28:41.862 Zone Descriptor Change Notices: Not Supported 00:28:41.862 Discovery Log Change Notices: Not Supported 00:28:41.862 Controller Attributes 00:28:41.862 128-bit Host Identifier: Supported 00:28:41.862 Non-Operational Permissive Mode: Not Supported 00:28:41.862 NVM Sets: Not Supported 00:28:41.862 Read Recovery Levels: Not Supported 00:28:41.862 Endurance Groups: Not Supported 00:28:41.862 Predictable Latency Mode: Not Supported 00:28:41.862 Traffic Based Keep ALive: Supported 00:28:41.862 Namespace Granularity: Not Supported 
00:28:41.862 SQ Associations: Not Supported 00:28:41.862 UUID List: Not Supported 00:28:41.862 Multi-Domain Subsystem: Not Supported 00:28:41.862 Fixed Capacity Management: Not Supported 00:28:41.862 Variable Capacity Management: Not Supported 00:28:41.862 Delete Endurance Group: Not Supported 00:28:41.862 Delete NVM Set: Not Supported 00:28:41.862 Extended LBA Formats Supported: Not Supported 00:28:41.862 Flexible Data Placement Supported: Not Supported 00:28:41.862 00:28:41.862 Controller Memory Buffer Support 00:28:41.862 ================================ 00:28:41.862 Supported: No 00:28:41.862 00:28:41.862 Persistent Memory Region Support 00:28:41.862 ================================ 00:28:41.862 Supported: No 00:28:41.862 00:28:41.862 Admin Command Set Attributes 00:28:41.862 ============================ 00:28:41.862 Security Send/Receive: Not Supported 00:28:41.862 Format NVM: Not Supported 00:28:41.862 Firmware Activate/Download: Not Supported 00:28:41.862 Namespace Management: Not Supported 00:28:41.862 Device Self-Test: Not Supported 00:28:41.862 Directives: Not Supported 00:28:41.862 NVMe-MI: Not Supported 00:28:41.862 Virtualization Management: Not Supported 00:28:41.862 Doorbell Buffer Config: Not Supported 00:28:41.862 Get LBA Status Capability: Not Supported 00:28:41.862 Command & Feature Lockdown Capability: Not Supported 00:28:41.862 Abort Command Limit: 4 00:28:41.862 Async Event Request Limit: 4 00:28:41.862 Number of Firmware Slots: N/A 00:28:41.862 Firmware Slot 1 Read-Only: N/A 00:28:41.862 Firmware Activation Without Reset: N/A 00:28:41.862 Multiple Update Detection Support: N/A 00:28:41.862 Firmware Update Granularity: No Information Provided 00:28:41.862 Per-Namespace SMART Log: Yes 00:28:41.862 Asymmetric Namespace Access Log Page: Supported 00:28:41.862 ANA Transition Time : 10 sec 00:28:41.862 00:28:41.862 Asymmetric Namespace Access Capabilities 00:28:41.862 ANA Optimized State : Supported 00:28:41.862 ANA Non-Optimized State : Supported 00:28:41.862 ANA Inaccessible State : Supported 00:28:41.862 ANA Persistent Loss State : Supported 00:28:41.862 ANA Change State : Supported 00:28:41.862 ANAGRPID is not changed : No 00:28:41.862 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:41.862 00:28:41.862 ANA Group Identifier Maximum : 128 00:28:41.862 Number of ANA Group Identifiers : 128 00:28:41.862 Max Number of Allowed Namespaces : 1024 00:28:41.862 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:41.862 Command Effects Log Page: Supported 00:28:41.862 Get Log Page Extended Data: Supported 00:28:41.862 Telemetry Log Pages: Not Supported 00:28:41.862 Persistent Event Log Pages: Not Supported 00:28:41.862 Supported Log Pages Log Page: May Support 00:28:41.862 Commands Supported & Effects Log Page: Not Supported 00:28:41.862 Feature Identifiers & Effects Log Page:May Support 00:28:41.862 NVMe-MI Commands & Effects Log Page: May Support 00:28:41.862 Data Area 4 for Telemetry Log: Not Supported 00:28:41.862 Error Log Page Entries Supported: 128 00:28:41.862 Keep Alive: Supported 00:28:41.862 Keep Alive Granularity: 1000 ms 00:28:41.862 00:28:41.862 NVM Command Set Attributes 00:28:41.862 ========================== 00:28:41.862 Submission Queue Entry Size 00:28:41.862 Max: 64 00:28:41.862 Min: 64 00:28:41.862 Completion Queue Entry Size 00:28:41.862 Max: 16 00:28:41.862 Min: 16 00:28:41.862 Number of Namespaces: 1024 00:28:41.862 Compare Command: Not Supported 00:28:41.862 Write Uncorrectable Command: Not Supported 00:28:41.862 Dataset Management Command: Supported 
00:28:41.862 Write Zeroes Command: Supported 00:28:41.862 Set Features Save Field: Not Supported 00:28:41.862 Reservations: Not Supported 00:28:41.862 Timestamp: Not Supported 00:28:41.862 Copy: Not Supported 00:28:41.862 Volatile Write Cache: Present 00:28:41.862 Atomic Write Unit (Normal): 1 00:28:41.862 Atomic Write Unit (PFail): 1 00:28:41.862 Atomic Compare & Write Unit: 1 00:28:41.862 Fused Compare & Write: Not Supported 00:28:41.862 Scatter-Gather List 00:28:41.862 SGL Command Set: Supported 00:28:41.862 SGL Keyed: Not Supported 00:28:41.862 SGL Bit Bucket Descriptor: Not Supported 00:28:41.862 SGL Metadata Pointer: Not Supported 00:28:41.862 Oversized SGL: Not Supported 00:28:41.862 SGL Metadata Address: Not Supported 00:28:41.862 SGL Offset: Supported 00:28:41.862 Transport SGL Data Block: Not Supported 00:28:41.862 Replay Protected Memory Block: Not Supported 00:28:41.862 00:28:41.862 Firmware Slot Information 00:28:41.862 ========================= 00:28:41.862 Active slot: 0 00:28:41.862 00:28:41.862 Asymmetric Namespace Access 00:28:41.862 =========================== 00:28:41.862 Change Count : 0 00:28:41.862 Number of ANA Group Descriptors : 1 00:28:41.862 ANA Group Descriptor : 0 00:28:41.862 ANA Group ID : 1 00:28:41.863 Number of NSID Values : 1 00:28:41.863 Change Count : 0 00:28:41.863 ANA State : 1 00:28:41.863 Namespace Identifier : 1 00:28:41.863 00:28:41.863 Commands Supported and Effects 00:28:41.863 ============================== 00:28:41.863 Admin Commands 00:28:41.863 -------------- 00:28:41.863 Get Log Page (02h): Supported 00:28:41.863 Identify (06h): Supported 00:28:41.863 Abort (08h): Supported 00:28:41.863 Set Features (09h): Supported 00:28:41.863 Get Features (0Ah): Supported 00:28:41.863 Asynchronous Event Request (0Ch): Supported 00:28:41.863 Keep Alive (18h): Supported 00:28:41.863 I/O Commands 00:28:41.863 ------------ 00:28:41.863 Flush (00h): Supported 00:28:41.863 Write (01h): Supported LBA-Change 00:28:41.863 Read (02h): Supported 00:28:41.863 Write Zeroes (08h): Supported LBA-Change 00:28:41.863 Dataset Management (09h): Supported 00:28:41.863 00:28:41.863 Error Log 00:28:41.863 ========= 00:28:41.863 Entry: 0 00:28:41.863 Error Count: 0x3 00:28:41.863 Submission Queue Id: 0x0 00:28:41.863 Command Id: 0x5 00:28:41.863 Phase Bit: 0 00:28:41.863 Status Code: 0x2 00:28:41.863 Status Code Type: 0x0 00:28:41.863 Do Not Retry: 1 00:28:41.863 Error Location: 0x28 00:28:41.863 LBA: 0x0 00:28:41.863 Namespace: 0x0 00:28:41.863 Vendor Log Page: 0x0 00:28:41.863 ----------- 00:28:41.863 Entry: 1 00:28:41.863 Error Count: 0x2 00:28:41.863 Submission Queue Id: 0x0 00:28:41.863 Command Id: 0x5 00:28:41.863 Phase Bit: 0 00:28:41.863 Status Code: 0x2 00:28:41.863 Status Code Type: 0x0 00:28:41.863 Do Not Retry: 1 00:28:41.863 Error Location: 0x28 00:28:41.863 LBA: 0x0 00:28:41.863 Namespace: 0x0 00:28:41.863 Vendor Log Page: 0x0 00:28:41.863 ----------- 00:28:41.863 Entry: 2 00:28:41.863 Error Count: 0x1 00:28:41.863 Submission Queue Id: 0x0 00:28:41.863 Command Id: 0x4 00:28:41.863 Phase Bit: 0 00:28:41.863 Status Code: 0x2 00:28:41.863 Status Code Type: 0x0 00:28:41.863 Do Not Retry: 1 00:28:41.863 Error Location: 0x28 00:28:41.863 LBA: 0x0 00:28:41.863 Namespace: 0x0 00:28:41.863 Vendor Log Page: 0x0 00:28:41.863 00:28:41.863 Number of Queues 00:28:41.863 ================ 00:28:41.863 Number of I/O Submission Queues: 128 00:28:41.863 Number of I/O Completion Queues: 128 00:28:41.863 00:28:41.863 ZNS Specific Controller Data 00:28:41.863 
============================ 00:28:41.863 Zone Append Size Limit: 0 00:28:41.863 00:28:41.863 00:28:41.863 Active Namespaces 00:28:41.863 ================= 00:28:41.863 get_feature(0x05) failed 00:28:41.863 Namespace ID:1 00:28:41.863 Command Set Identifier: NVM (00h) 00:28:41.863 Deallocate: Supported 00:28:41.863 Deallocated/Unwritten Error: Not Supported 00:28:41.863 Deallocated Read Value: Unknown 00:28:41.863 Deallocate in Write Zeroes: Not Supported 00:28:41.863 Deallocated Guard Field: 0xFFFF 00:28:41.863 Flush: Supported 00:28:41.863 Reservation: Not Supported 00:28:41.863 Namespace Sharing Capabilities: Multiple Controllers 00:28:41.863 Size (in LBAs): 3750748848 (1788GiB) 00:28:41.863 Capacity (in LBAs): 3750748848 (1788GiB) 00:28:41.863 Utilization (in LBAs): 3750748848 (1788GiB) 00:28:41.863 UUID: 63462e4d-ac57-408b-b067-aa2806bb61ae 00:28:41.863 Thin Provisioning: Not Supported 00:28:41.863 Per-NS Atomic Units: Yes 00:28:41.863 Atomic Write Unit (Normal): 8 00:28:41.863 Atomic Write Unit (PFail): 8 00:28:41.863 Preferred Write Granularity: 8 00:28:41.863 Atomic Compare & Write Unit: 8 00:28:41.863 Atomic Boundary Size (Normal): 0 00:28:41.863 Atomic Boundary Size (PFail): 0 00:28:41.863 Atomic Boundary Offset: 0 00:28:41.863 NGUID/EUI64 Never Reused: No 00:28:41.863 ANA group ID: 1 00:28:41.863 Namespace Write Protected: No 00:28:41.863 Number of LBA Formats: 1 00:28:41.863 Current LBA Format: LBA Format #00 00:28:41.863 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:41.863 00:28:41.863 09:37:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:41.863 09:37:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:41.863 09:37:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:28:41.863 09:37:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:41.863 09:37:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:28:41.863 09:37:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:41.863 09:37:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:41.863 rmmod nvme_tcp 00:28:41.863 rmmod nvme_fabrics 00:28:41.863 09:37:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:42.123 09:37:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:28:42.123 09:37:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:28:42.123 09:37:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:42.123 09:37:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:42.123 09:37:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:42.123 09:37:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:42.123 09:37:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:42.123 09:37:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:42.123 09:37:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.123 09:37:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:42.123 09:37:29 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.037 09:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:44.037 09:37:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:44.037 09:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:44.037 09:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:28:44.037 09:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:44.037 09:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:44.037 09:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:44.037 09:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:44.037 09:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:44.037 09:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:44.037 09:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:48.246 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:48.246 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:48.246 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:48.246 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:48.246 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:48.246 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:48.246 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:48.246 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:48.246 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:48.246 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:48.246 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:48.246 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:48.246 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:48.246 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:48.246 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:48.246 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:48.246 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:48.246 00:28:48.246 real 0m19.803s 00:28:48.246 user 0m5.382s 00:28:48.246 sys 0m11.506s 00:28:48.246 09:37:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:48.246 09:37:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:48.247 ************************************ 00:28:48.247 END TEST nvmf_identify_kernel_target 00:28:48.247 ************************************ 00:28:48.247 09:37:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:48.247 09:37:35 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:48.247 09:37:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:48.247 09:37:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:48.247 09:37:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:48.247 ************************************ 
00:28:48.247 START TEST nvmf_auth_host 00:28:48.247 ************************************ 00:28:48.247 09:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:48.507 * Looking for test storage... 00:28:48.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:48.507 09:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:48.507 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:48.507 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.507 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.507 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:48.507 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.507 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.507 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.507 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.507 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.507 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.507 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:48.507 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:48.507 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:48.507 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.507 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.507 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:48.507 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:48.507 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:48.507 09:37:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:48.507 09:37:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:28:48.508 09:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:56.656 
09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:56.656 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:56.656 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:56.656 Found net devices under 0000:31:00.0: 
cvl_0_0 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:56.656 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:56.657 Found net devices under 0000:31:00.1: cvl_0_1 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:56.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:56.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:28:56.657 00:28:56.657 --- 10.0.0.2 ping statistics --- 00:28:56.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.657 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:56.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:56.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:28:56.657 00:28:56.657 --- 10.0.0.1 ping statistics --- 00:28:56.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.657 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=864922 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 864922 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 864922 ']' 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
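Condensed from the trace above: nvmf_tcp_init splits the two e810 ports between a network namespace (target side, 10.0.0.2) and the host (initiator side, 10.0.0.1), opens the NVMe/TCP port in iptables, verifies reachability with ping, and nvmfappstart then launches the SPDK target inside that namespace with nvme_auth debug tracing enabled. Interface names and addresses are the ones from this run; the nvmf_tgt path is shortened:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &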
00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:56.657 09:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.601 09:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a1a8ff6ef746a8b6e272cde5b72569a2 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.rZc 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a1a8ff6ef746a8b6e272cde5b72569a2 0 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a1a8ff6ef746a8b6e272cde5b72569a2 0 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a1a8ff6ef746a8b6e272cde5b72569a2 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.rZc 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.rZc 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.rZc 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:57.602 
09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d1cda8228fc37de309fa0ffd407ac535b3174e7dbfde37f97e0c5693139dad84 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.NpE 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d1cda8228fc37de309fa0ffd407ac535b3174e7dbfde37f97e0c5693139dad84 3 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d1cda8228fc37de309fa0ffd407ac535b3174e7dbfde37f97e0c5693139dad84 3 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d1cda8228fc37de309fa0ffd407ac535b3174e7dbfde37f97e0c5693139dad84 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.NpE 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.NpE 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.NpE 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d649bde83bf6f79e2113e1ca16e2f6df21d79c1dd1fac8e8 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Oau 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d649bde83bf6f79e2113e1ca16e2f6df21d79c1dd1fac8e8 0 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d649bde83bf6f79e2113e1ca16e2f6df21d79c1dd1fac8e8 0 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d649bde83bf6f79e2113e1ca16e2f6df21d79c1dd1fac8e8 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Oau 00:28:57.602 09:37:44 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Oau 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Oau 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=beda6c830c7b30d7365c7f9872db9172690e27ea1fa1049d 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Pnt 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key beda6c830c7b30d7365c7f9872db9172690e27ea1fa1049d 2 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 beda6c830c7b30d7365c7f9872db9172690e27ea1fa1049d 2 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=beda6c830c7b30d7365c7f9872db9172690e27ea1fa1049d 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:57.602 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Pnt 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Pnt 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Pnt 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f934e8d47ef2619a48b61d49777da4e1 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.1nZ 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f934e8d47ef2619a48b61d49777da4e1 1 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f934e8d47ef2619a48b61d49777da4e1 1 
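Every gen_dhchap_key call traced in this block follows the same pattern: pull the requested number of random hex characters from /dev/urandom with xxd, wrap them into an NVMe DH-HMAC-CHAP secret, and record the resulting file path in keys[]/ckeys[]. A minimal sketch of that pattern; the DHHC-1 wrapping itself is done by the inline "python -" helper, whose body is not shown in the trace and is therefore only referenced here:

  digest=1                                    # 0=null, 1=sha256, 2=sha384, 3=sha512
  key=$(xxd -p -c0 -l 16 /dev/urandom)        # 16 random bytes -> 32 hex chars
  file=$(mktemp -t spdk.key-sha256.XXX)
  # the traced "python -" step formats $key as a DHHC-1:0<digest>:...: secret;
  # the formatted string ends up in $file (the redirection is hidden by the trace)
  chmod 0600 "$file"
  keys[2]=$file                               # index matches the keyid being prepared here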
00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f934e8d47ef2619a48b61d49777da4e1 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.1nZ 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.1nZ 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.1nZ 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0e5b33cefeae3548fb04ee84b8c4d581 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.0tq 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0e5b33cefeae3548fb04ee84b8c4d581 1 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0e5b33cefeae3548fb04ee84b8c4d581 1 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0e5b33cefeae3548fb04ee84b8c4d581 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.0tq 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.0tq 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.0tq 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=adbf878a50c87476d3cedf941db1739852e82dca26d8bed1 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.a6H 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key adbf878a50c87476d3cedf941db1739852e82dca26d8bed1 2 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 adbf878a50c87476d3cedf941db1739852e82dca26d8bed1 2 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:57.865 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=adbf878a50c87476d3cedf941db1739852e82dca26d8bed1 00:28:57.866 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:57.866 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:57.866 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.a6H 00:28:57.866 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.a6H 00:28:57.866 09:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.a6H 00:28:57.866 09:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:57.866 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:57.866 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:57.866 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:57.866 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:57.866 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:57.866 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:57.866 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=32ecc2b57b50e1cec117835baf3395ac 00:28:57.866 09:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:57.866 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.NZa 00:28:57.866 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 32ecc2b57b50e1cec117835baf3395ac 0 00:28:57.866 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 32ecc2b57b50e1cec117835baf3395ac 0 00:28:57.866 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:57.866 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:57.866 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=32ecc2b57b50e1cec117835baf3395ac 00:28:57.866 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:57.866 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:57.866 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.NZa 00:28:57.866 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.NZa 00:28:57.866 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.NZa 00:28:57.866 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:57.866 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:28:57.866 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:57.866 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:57.866 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:57.866 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:57.866 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:57.866 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3690bba587b46234aa5ade03acdf51d701298c0dd4c6209ef70860a63ffe7238 00:28:58.127 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:58.127 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.BWX 00:28:58.127 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3690bba587b46234aa5ade03acdf51d701298c0dd4c6209ef70860a63ffe7238 3 00:28:58.127 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3690bba587b46234aa5ade03acdf51d701298c0dd4c6209ef70860a63ffe7238 3 00:28:58.127 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:58.127 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:58.127 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3690bba587b46234aa5ade03acdf51d701298c0dd4c6209ef70860a63ffe7238 00:28:58.127 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:58.127 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:58.127 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.BWX 00:28:58.127 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.BWX 00:28:58.127 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.BWX 00:28:58.127 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:58.127 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 864922 00:28:58.127 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 864922 ']' 00:28:58.127 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.127 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:58.128 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
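For reference, the full key set generated above for this run, as recorded in keys[] and ckeys[]; keys[i] is the secret later passed as --dhchap-key, and ckeys[i] is the optional controller secret passed as --dhchap-ctrlr-key for bidirectional authentication:

  keys=(  /tmp/spdk.key-null.rZc   /tmp/spdk.key-null.Oau   /tmp/spdk.key-sha256.1nZ /tmp/spdk.key-sha384.a6H /tmp/spdk.key-sha512.BWX )
  ckeys=( /tmp/spdk.key-sha512.NpE /tmp/spdk.key-sha384.Pnt /tmp/spdk.key-sha256.0tq /tmp/spdk.key-null.NZa   "" )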
00:28:58.128 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:58.128 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.128 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:58.128 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:28:58.128 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:58.128 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rZc 00:28:58.128 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.128 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.128 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.128 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.NpE ]] 00:28:58.128 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.NpE 00:28:58.128 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.128 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.128 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.128 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:58.128 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Oau 00:28:58.128 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.128 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.128 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.128 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Pnt ]] 00:28:58.128 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Pnt 00:28:58.128 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.128 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.1nZ 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.0tq ]] 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0tq 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
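The registration loop traced here hands every generated secret to the running target over its RPC socket, condensed:

  for i in "${!keys[@]}"; do
      rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"
      [[ -n ${ckeys[$i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
  done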
00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.a6H 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.NZa ]] 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.NZa 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.BWX 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:58.389 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:58.390 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:58.390 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:58.390 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
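nvmet_auth_init/configure_kernel_target now builds the kernel-side subsystem that will authenticate the SPDK initiator: a configfs subsystem backed by the local NVMe namespace found below, listening on 10.0.0.1:4420 over TCP. The echo redirect targets are hidden by the trace, so the standard nvmet configfs attribute names are assumed in this sketch:

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # backing block device
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"
  # the `nvme discover ... -a 10.0.0.1 -t tcp -s 4420` below then lists both the
  # discovery subsystem and nqn.2024-02.io.spdk:cnode0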
00:28:58.390 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:28:58.390 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:58.390 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:58.390 09:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:02.592 Waiting for block devices as requested 00:29:02.592 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:02.592 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:02.592 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:02.592 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:02.592 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:02.592 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:02.592 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:02.592 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:02.592 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:29:02.852 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:02.852 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:02.852 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:03.112 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:03.112 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:03.112 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:03.371 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:03.371 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:03.942 No valid GPT data, bailing 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:03.942 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:29:03.942 00:29:03.942 Discovery Log Number of Records 2, Generation counter 2 00:29:03.942 =====Discovery Log Entry 0====== 00:29:03.942 trtype: tcp 00:29:03.942 adrfam: ipv4 00:29:03.942 subtype: current discovery subsystem 00:29:03.942 treq: not specified, sq flow control disable supported 00:29:03.942 portid: 1 00:29:03.942 trsvcid: 4420 00:29:03.942 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:03.942 traddr: 10.0.0.1 00:29:03.942 eflags: none 00:29:03.942 sectype: none 00:29:03.942 =====Discovery Log Entry 1====== 00:29:03.942 trtype: tcp 00:29:03.942 adrfam: ipv4 00:29:03.942 subtype: nvme subsystem 00:29:03.942 treq: not specified, sq flow control disable supported 00:29:03.942 portid: 1 00:29:03.942 trsvcid: 4420 00:29:03.942 subnqn: nqn.2024-02.io.spdk:cnode0 00:29:03.942 traddr: 10.0.0.1 00:29:03.942 eflags: none 00:29:03.942 sectype: none 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 
]] 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.203 nvme0n1 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.203 09:37:51 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: ]] 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:04.203 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:04.204 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:04.204 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.204 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:04.204 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.204 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.204 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.204 
09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.204 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:04.204 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:04.204 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:04.204 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.204 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.204 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:04.204 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.204 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.464 nvme0n1 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:04.464 09:37:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: ]] 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.464 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.724 nvme0n1 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
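The records above repeat one host-side cycle per digest/DH-group/key combination: restrict the initiator's allowed DH-HMAC-CHAP parameters with bdev_nvme_set_options, attach a controller with the matching DHHC-1 secrets, confirm that a controller named nvme0 shows up, then detach it. A minimal sketch of that cycle follows, assuming SPDK's scripts/rpc.py is what the rpc_cmd wrapper ends up invoking and that key1/ckey1 were registered as key names earlier in the test (neither detail is shown in this part of the log):

  # One connect/verify/teardown pass, sketched from the trace above.
  # The scripts/rpc.py path and the key names key1/ckey1 are assumptions.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0

Passing --dhchap-ctrlr-key requests bidirectional authentication; iterations that attach with only --dhchap-key (key4 in this log) exercise the unidirectional case.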
00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: ]] 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.724 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.725 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:04.725 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:04.725 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:04.725 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.725 09:37:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.725 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:04.725 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.725 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:04.725 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:04.725 09:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:04.725 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:04.725 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.725 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.986 nvme0n1 00:29:04.986 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.986 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.986 09:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.986 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.986 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.986 09:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: ]] 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:04.986 09:37:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.986 nvme0n1 00:29:04.986 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.247 nvme0n1 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.247 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: ]] 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.508 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.509 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:05.509 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:05.509 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:05.509 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.509 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.509 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:05.509 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.509 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:05.509 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:05.509 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:05.509 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:05.509 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.509 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.509 nvme0n1 00:29:05.509 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.509 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.509 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.509 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.509 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.509 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.509 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.509 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.509 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.509 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.768 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.768 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.768 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:29:05.768 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.768 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: ]] 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.769 nvme0n1 00:29:05.769 
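The for-loop markers in the trace (host/auth.sh@100 through @104) show how this whole section is driven: one target-side key-programming step followed by one host-side connect for every digest, DH group, and key index. A rough reconstruction of that sweep, filling in only the values visible in this log and leaving the keys/ckeys arrays as placeholders prepared earlier in the test:

  # Sweep reconstructed from the host/auth.sh loop markers in the trace above.
  # keys[]/ckeys[] hold the DHHC-1 secrets generated earlier (not shown here).
  digests=(sha256 sha384 sha512)
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target-side key/params
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify and detach on the host
      done
    done
  done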
09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.769 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.029 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.029 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.029 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:29:06.029 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.029 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:06.029 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:06.029 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:06.029 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:06.029 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:06.029 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:06.029 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: ]] 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.030 09:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.030 nvme0n1 00:29:06.030 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.030 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.030 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.030 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:06.030 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.030 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.030 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.030 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.030 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.030 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: ]] 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.289 nvme0n1 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.289 
09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.289 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.548 09:37:53 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.548 nvme0n1 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.548 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: ]] 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:29:06.808 09:37:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.808 09:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.082 nvme0n1 00:29:07.082 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.082 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.082 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.082 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.082 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.082 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.082 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.082 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.082 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.082 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.082 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.082 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:29:07.082 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:29:07.082 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.082 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:07.082 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: ]] 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:07.083 09:37:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.083 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.462 nvme0n1 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: ]] 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.462 09:37:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.462 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.723 nvme0n1 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
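Each pass of the loop traced above reduces to the same four RPCs. A minimal sketch of replaying one pass by hand is shown here, assuming a target already provisioned by the earlier setup steps of this run; rpc_cmd is the autotest wrapper around scripts/rpc.py, and key1/ckey1 are the keyring names the script registered earlier in the run, not literal secrets.

  # limit the host to a single digest/dhgroup pair for this pass (host/auth.sh@60)
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  # connect and authenticate; --dhchap-ctrlr-key enables bidirectional authentication (host/auth.sh@61)
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # confirm the controller came up, then detach it before the next keyid (host/auth.sh@64-65)
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'
  rpc_cmd bdev_nvme_detach_controller nvme0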
00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: ]] 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.723 09:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.983 nvme0n1 00:29:07.983 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.983 09:37:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.983 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.983 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.983 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.983 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.983 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.983 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.983 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.983 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.983 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.983 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.983 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:29:07.983 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.984 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.243 nvme0n1 00:29:08.244 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.244 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.244 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.244 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.244 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.244 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.503 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.503 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.503 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.503 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.503 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.503 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:08.503 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.503 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:29:08.503 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.503 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:08.503 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:08.503 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:08.504 09:37:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: ]] 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.504 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.764 nvme0n1 00:29:08.764 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.764 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.764 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.764 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.764 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.764 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.025 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.025 
09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.025 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.025 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.025 09:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.025 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.025 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:29:09.025 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.025 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:09.025 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:09.025 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:09.025 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:09.025 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:09.025 09:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: ]] 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.025 09:37:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.025 09:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.285 nvme0n1 00:29:09.286 09:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.286 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.286 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.286 09:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.286 09:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.286 09:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: ]] 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.546 09:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.807 nvme0n1 00:29:09.807 09:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.807 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.807 09:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.807 09:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.807 09:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.807 09:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.067 
09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: ]] 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:10.067 09:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:10.068 09:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.068 09:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.068 09:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:10.068 09:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.068 09:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:10.068 09:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:10.068 09:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:10.068 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:10.068 09:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.068 09:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.327 nvme0n1 00:29:10.327 09:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.327 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.327 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.327 09:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.327 09:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.327 09:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.588 09:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.849 nvme0n1 00:29:10.849 09:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.849 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.849 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.849 09:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.849 09:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: ]] 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.109 09:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.680 nvme0n1 00:29:11.680 09:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.680 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.680 09:37:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.680 09:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.680 09:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.680 09:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.940 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.940 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.940 09:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.940 09:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.940 09:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.940 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.940 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: ]] 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.941 09:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.512 nvme0n1 00:29:12.512 09:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.512 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:12.512 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:12.512 09:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.512 09:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.512 09:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.512 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.512 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:12.512 09:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.512 09:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: ]] 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.773 09:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.344 nvme0n1 00:29:13.344 09:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.344 09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:13.344 09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:13.344 09:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.344 09:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.344 09:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.344 09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:13.344 
09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:13.344 09:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.344 09:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.344 09:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.344 09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:13.344 09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:29:13.344 09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:13.344 09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:13.344 09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:13.344 09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:13.344 09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:13.344 09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:13.344 09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:13.344 09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:13.344 09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:13.344 09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: ]] 00:29:13.345 09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:13.345 09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:29:13.345 09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:13.345 09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:13.345 09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:13.345 09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:13.345 09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:13.345 09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:13.345 09:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.345 09:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.605 09:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.605 09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:13.605 09:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:13.605 09:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:13.605 09:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:13.605 09:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.605 09:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.605 09:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:29:13.605 09:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.605 09:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:13.605 09:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:13.605 09:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:13.605 09:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:13.605 09:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.605 09:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.176 nvme0n1 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:14.176 
09:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:14.176 09:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:14.177 09:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.177 09:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.177 09:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.177 09:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:14.177 09:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:14.177 09:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:14.177 09:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:14.177 09:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:14.177 09:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:14.177 09:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:14.177 09:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:14.177 09:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:14.177 09:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:14.177 09:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:14.177 09:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:14.177 09:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.177 09:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.117 nvme0n1 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: ]] 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:15.117 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.118 nvme0n1 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.118 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.378 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.378 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.378 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:15.378 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.378 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.378 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.378 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:15.378 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: ]] 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.379 nvme0n1 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.379 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: ]] 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.640 nvme0n1 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: ]] 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.640 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.901 nvme0n1 00:29:15.901 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.901 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.901 09:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:15.901 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.901 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.901 09:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.901 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.160 nvme0n1 00:29:16.160 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.160 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:16.160 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:16.160 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.160 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.160 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.160 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.160 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.160 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:29:16.160 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.160 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.160 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:16.160 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:16.160 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: ]] 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.161 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.421 nvme0n1 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: ]] 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.421 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.681 nvme0n1 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: ]] 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.681 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.941 nvme0n1 00:29:16.941 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.941 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:16.941 09:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:16.941 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.941 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.941 09:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.941 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.941 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.941 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.941 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.941 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.941 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:16.941 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:29:16.941 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:16.941 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:16.941 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:16.941 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:16.941 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:16.941 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:16.941 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: ]] 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.942 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.202 nvme0n1 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.202 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.463 nvme0n1 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.463 09:38:04 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: ]] 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:17.463 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.464 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.464 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:17.464 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.464 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:17.464 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:17.464 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:17.464 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:17.464 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.464 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.725 nvme0n1 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: ]] 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.725 09:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.985 nvme0n1 00:29:17.985 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.986 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.986 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.986 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.986 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.986 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.246 09:38:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: ]] 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.246 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.507 nvme0n1 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: ]] 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:29:18.507 09:38:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.507 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.767 nvme0n1 00:29:18.767 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.028 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.028 09:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.028 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.028 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.028 09:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:29:19.028 09:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.290 nvme0n1 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: ]] 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.290 09:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.860 nvme0n1 00:29:19.860 09:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.860 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.860 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.860 09:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.860 09:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.119 09:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.119 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.119 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.119 09:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.119 09:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.119 09:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.119 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.119 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:29:20.119 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.119 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:20.119 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:20.119 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:20.119 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:20.119 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:20.119 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:20.119 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:20.119 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:20.119 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: ]] 00:29:20.119 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:20.119 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:29:20.119 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.119 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:20.119 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:20.119 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:20.119 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.120 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:20.120 09:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.120 09:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.120 09:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.120 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.120 09:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:20.120 09:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:20.120 09:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:20.120 09:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.120 09:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.120 09:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:20.120 09:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.120 09:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:20.120 09:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:20.120 09:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:20.120 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:20.120 09:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.120 09:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.689 nvme0n1 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.689 09:38:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: ]] 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.689 09:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.259 nvme0n1 00:29:21.259 09:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.259 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.259 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.259 09:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.259 09:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.259 09:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: ]] 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.520 09:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.090 nvme0n1 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.090 09:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.659 nvme0n1 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: ]] 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
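The pass above repeats one host-side RPC sequence for every digest/dhgroup/keyid combination: restrict the allowed DH-HMAC-CHAP parameters, then attach with the matching key pair. A minimal sketch of that sequence for the sha384/ffdhe4096/key0 case, assuming a running SPDK bdev/nvme application reachable through scripts/rpc.py (the test's rpc_cmd helper forwards its arguments to that script) and key names key0/ckey0 provisioned earlier in the test (not shown in this excerpt):
# Allow only the digest and DH group under test on the host side
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
# Attach to the target at the initiator IP, authenticating with key0 and
# requiring the controller to authenticate back with ckey0
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0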
00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:22.659 09:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.660 09:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:22.660 09:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:22.660 09:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:22.660 09:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:22.660 09:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.660 09:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.230 nvme0n1 00:29:23.230 09:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.230 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.230 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.230 09:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.230 09:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.230 09:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: ]] 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.490 09:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.060 nvme0n1 00:29:24.060 09:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.060 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.060 09:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.060 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.060 09:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.060 09:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.321 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: ]] 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.322 09:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.900 nvme0n1 00:29:24.900 09:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.900 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.900 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.900 09:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.900 09:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.900 09:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.900 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.900 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.900 09:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.900 09:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: ]] 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.165 09:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.736 nvme0n1 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.736 09:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.997 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.997 09:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:25.997 09:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:25.997 09:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:25.997 09:38:12 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.997 09:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.997 09:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:25.997 09:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.997 09:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:25.997 09:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:25.997 09:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:25.997 09:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:25.997 09:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.997 09:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.568 nvme0n1 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: ]] 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.568 09:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:26.569 09:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:26.569 09:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:26.569 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:26.569 09:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.569 09:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.829 nvme0n1 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.829 09:38:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: ]] 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.829 09:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.090 nvme0n1 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: ]] 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.090 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.350 nvme0n1 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.350 09:38:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: ]] 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:27.350 09:38:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.350 nvme0n1 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.350 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.610 nvme0n1 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.610 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.611 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.611 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.611 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: ]] 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.906 09:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.906 nvme0n1 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.906 
09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: ]] 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.906 09:38:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.906 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.167 nvme0n1 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
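Note: the attach/verify/detach cycle repeated throughout this trace is the initiator-side half of each authentication case. rpc_cmd (the autotest wrapper around SPDK's scripts/rpc.py) first restricts the negotiable DH-HMAC-CHAP parameters with bdev_nvme_set_options, then attaches to nqn.2024-02.io.spdk:cnode0 at 10.0.0.1:4420 with a host key and, when one is defined, a bidirectional controller key, checks via bdev_nvme_get_controllers that the nvme0 controller actually appeared, and detaches it before the next combination. A minimal stand-alone sketch of that cycle, assuming scripts/rpc.py can reach the running target over its default RPC socket and that keyring entries named key1/ckey1 were registered earlier in auth.sh (not visible in this excerpt):

    # restrict negotiation to one digest/DH-group pair for this iteration
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    # attach with the host key (key1) and the bidirectional controller key (ckey1)
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # authentication succeeded only if the controller shows up under its bdev name
    [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # clean up before the next digest/dhgroup/key combination
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0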
00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: ]] 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:28.167 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:28.168 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:28.168 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:28.168 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.168 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.428 nvme0n1 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.428 09:38:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: ]] 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
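Note: the @100/@101/@102 markers in the xtrace prefixes come from three nested loops in host/auth.sh: over the configured digests (sha384 and sha512 appear in this excerpt), over the DH groups (ffdhe2048, ffdhe3072, ffdhe4096 and ffdhe8192 appear), and over key IDs 0-4. Each iteration calls nvmet_auth_set_key to re-provision the target with the DHHC-1 secrets echoed in the trace, then connect_authenticate to run the attach/verify/detach cycle. Reconstructed from those markers, the driving loop looks roughly like this (helper bodies and the keys/ckeys arrays elided; a sketch, not the verbatim script):

    # host/auth.sh-style iteration over every digest/dhgroup/key combination
    for digest in "${digests[@]}"; do          # e.g. sha384 sha512
        for dhgroup in "${dhgroups[@]}"; do    # e.g. ffdhe2048 ... ffdhe8192
            for keyid in "${!keys[@]}"; do     # key IDs 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target-side key/hash/dhgroup setup
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # initiator attach, verify, detach
            done
        done
    done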
00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.428 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.689 nvme0n1 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:28.689 
09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.689 09:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.950 nvme0n1 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: ]] 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.950 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:28.951 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.951 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:28.951 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:28.951 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:28.951 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:28.951 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.951 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.211 nvme0n1 00:29:29.211 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.211 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.211 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.211 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.211 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: ]] 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.471 09:38:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.471 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.732 nvme0n1 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
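The sha512/ffdhe4096 passes above all run the same connect_authenticate cycle: push the key into the target side, restrict the host's DH-HMAC-CHAP digests and dhgroups via bdev_nvme_set_options, attach with the matching --dhchap-key (plus --dhchap-ctrlr-key when one exists), confirm bdev_nvme_get_controllers reports nvme0, then detach. A standalone sketch of that RPC sequence for keyid=1, assuming rpc_cmd in the trace is the usual autotest wrapper around scripts/rpc.py (the addresses, NQNs, and flags are copied from the attach call above; the scripts/rpc.py invocation itself is an assumption, not what the harness literally runs):

# one sha512/ffdhe4096 round for keyid=1 (values taken from the trace above)
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0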
00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: ]] 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.732 09:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.993 nvme0n1 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: ]] 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:29.993 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:30.253 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.253 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.253 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:30.253 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.253 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:30.253 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:30.253 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:30.253 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:30.253 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.253 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.514 nvme0n1 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.514 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.775 nvme0n1 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: ]] 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:30.775 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.776 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:30.776 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.776 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.776 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.776 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:30.776 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:30.776 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:30.776 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:30.776 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.776 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.776 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
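The nvmf/common.sh@741-755 entries interleaved around this point are the get_main_ns_ip helper resolving which address gets handed to bdev_nvme_attach_controller. A sketch of the logic the trace walks through, with the bash indirect expansion spelled out; the function body is inferred from the trace lines, so anything beyond the variable names actually shown (for example TEST_TRANSPORT) is an assumption:

get_main_ns_ip() {
    # inferred from the nvmf/common.sh@741-755 trace; an approximation, not the exact source
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT ]] && return 1                 # trace: [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}                 # trace: ip=NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                          # indirect: trace shows [[ -z 10.0.0.1 ]]
    echo "${!ip}"                                        # 10.0.0.1 in this run
}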
00:29:30.776 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.776 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:30.776 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:30.776 09:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:30.776 09:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:30.776 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.776 09:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.347 nvme0n1 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: ]] 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
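The controller key is optional per keyid: auth.sh@58 builds ckey with ${ckeys[keyid]:+...}, which is why the keyid=4 passes earlier show ckey= left empty, [[ -z '' ]], and an attach call carrying only --dhchap-key key4. A minimal illustration of that expansion (the key strings below are placeholders, not the secrets from the trace):

# ${var:+word} expands to word only when var is set and non-empty, so the array
# stays empty for keyid=4 and the flag pair is dropped from bdev_nvme_attach_controller.
ckeys=([1]="DHHC-1:02:placeholder-ctrlr-key" [4]="")
keyid=4
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]}"          # 0 -> no --dhchap-ctrlr-key argument
keyid=1
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${ckey[@]}"           # --dhchap-ctrlr-key ckey1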
00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.347 09:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.917 nvme0n1 00:29:31.917 09:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.917 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.917 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.917 09:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.917 09:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.917 09:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.917 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: ]] 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.918 09:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.489 nvme0n1 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: ]] 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.489 09:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.490 09:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:32.490 09:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.490 09:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:32.490 09:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:32.490 09:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:32.490 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:32.490 09:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.490 09:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.750 nvme0n1 00:29:32.750 09:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.750 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.750 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.750 09:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.750 09:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.750 09:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:33.011 09:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:33.012 09:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.012 09:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.271 nvme0n1 00:29:33.271 09:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.271 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.271 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.271 09:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.271 09:38:20 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.271 09:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhOGZmNmVmNzQ2YThiNmUyNzJjZGU1YjcyNTY5YTJrJJx/: 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: ]] 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjZGE4MjI4ZmMzN2RlMzA5ZmEwZmZkNDA3YWM1MzViMzE3NGU3ZGJmZGUzN2Y5N2UwYzU2OTMxMzlkYWQ4NIaVuSc=: 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.532 09:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.102 nvme0n1 00:29:34.102 09:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.102 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.102 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.102 09:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.102 09:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: ]] 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:34.362 09:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:34.363 09:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:34.363 09:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:34.363 09:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.363 09:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.933 nvme0n1 00:29:34.933 09:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.933 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.933 09:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.933 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.933 09:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.933 09:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.194 09:38:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.194 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.194 09:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.194 09:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.194 09:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.194 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.194 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:29:35.194 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.194 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:35.194 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:35.194 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:35.194 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:35.194 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:35.194 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:35.194 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:35.194 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjkzNGU4ZDQ3ZWYyNjE5YTQ4YjYxZDQ5Nzc3ZGE0ZTEfYLKK: 00:29:35.194 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: ]] 00:29:35.194 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGU1YjMzY2VmZWFlMzU0OGZiMDRlZTg0YjhjNGQ1ODGHRLlr: 00:29:35.194 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:29:35.194 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.194 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:35.194 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:35.194 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:35.194 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.194 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:35.195 09:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.195 09:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.195 09:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.195 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:35.195 09:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:35.195 09:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:35.195 09:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:35.195 09:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.195 09:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.195 09:38:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:35.195 09:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.195 09:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:35.195 09:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:35.195 09:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:35.195 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:35.195 09:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.195 09:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.767 nvme0n1 00:29:35.767 09:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.767 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.767 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.767 09:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.767 09:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.767 09:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRiZjg3OGE1MGM4NzQ3NmQzY2VkZjk0MWRiMTczOTg1MmU4MmRjYTI2ZDhiZWQxBSYIcQ==: 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: ]] 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlY2MyYjU3YjUwZTFjZWMxMTc4MzViYWYzMzk1YWMnPr7d: 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:29:36.027 09:38:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.027 09:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.027 09:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.027 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.027 09:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:36.027 09:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:36.027 09:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:36.027 09:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.027 09:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.027 09:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:36.027 09:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.027 09:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:36.027 09:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:36.027 09:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:36.027 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:36.027 09:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.027 09:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.596 nvme0n1 00:29:36.596 09:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.596 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.596 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.596 09:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.596 09:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.596 09:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.596 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.596 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.596 09:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.596 09:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzY5MGJiYTU4N2I0NjIzNGFhNWFkZTAzYWNkZjUxZDcwMTI5OGMwZGQ0YzYyMDllZjcwODYwYTYzZmZlNzIzOA5VQ/4=: 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:29:36.857 09:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.427 nvme0n1 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDY0OWJkZTgzYmY2Zjc5ZTIxMTNlMWNhMTZlMmY2ZGYyMWQ3OWMxZGQxZmFjOGU4USPrAg==: 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: ]] 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVkYTZjODMwYzdiMzBkNzM2NWM3Zjk4NzJkYjkxNzI2OTBlMjdlYTFmYTEwNDlk2ci5jA==: 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.427 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.688 
09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.688 request: 00:29:37.688 { 00:29:37.688 "name": "nvme0", 00:29:37.688 "trtype": "tcp", 00:29:37.688 "traddr": "10.0.0.1", 00:29:37.688 "adrfam": "ipv4", 00:29:37.688 "trsvcid": "4420", 00:29:37.688 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:37.688 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:37.688 "prchk_reftag": false, 00:29:37.688 "prchk_guard": false, 00:29:37.688 "hdgst": false, 00:29:37.688 "ddgst": false, 00:29:37.688 "method": "bdev_nvme_attach_controller", 00:29:37.688 "req_id": 1 00:29:37.688 } 00:29:37.688 Got JSON-RPC error response 00:29:37.688 response: 00:29:37.688 { 00:29:37.688 "code": -5, 00:29:37.688 "message": "Input/output error" 00:29:37.688 } 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.688 request: 00:29:37.688 { 00:29:37.688 "name": "nvme0", 00:29:37.688 "trtype": "tcp", 00:29:37.688 "traddr": "10.0.0.1", 00:29:37.688 "adrfam": "ipv4", 00:29:37.688 "trsvcid": "4420", 00:29:37.688 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:37.688 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:37.688 "prchk_reftag": false, 00:29:37.688 "prchk_guard": false, 00:29:37.688 "hdgst": false, 00:29:37.688 "ddgst": false, 00:29:37.688 "dhchap_key": "key2", 00:29:37.688 "method": "bdev_nvme_attach_controller", 00:29:37.688 "req_id": 1 00:29:37.688 } 00:29:37.688 Got JSON-RPC error response 00:29:37.688 response: 00:29:37.688 { 00:29:37.688 "code": -5, 00:29:37.688 "message": "Input/output error" 00:29:37.688 } 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:29:37.688 09:38:24 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:37.688 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:37.689 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:37.689 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.689 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.689 request: 00:29:37.689 { 00:29:37.689 "name": "nvme0", 00:29:37.689 "trtype": "tcp", 00:29:37.689 "traddr": "10.0.0.1", 00:29:37.689 "adrfam": "ipv4", 
00:29:37.689 "trsvcid": "4420", 00:29:37.689 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:37.689 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:37.689 "prchk_reftag": false, 00:29:37.689 "prchk_guard": false, 00:29:37.689 "hdgst": false, 00:29:37.689 "ddgst": false, 00:29:37.689 "dhchap_key": "key1", 00:29:37.689 "dhchap_ctrlr_key": "ckey2", 00:29:37.689 "method": "bdev_nvme_attach_controller", 00:29:37.689 "req_id": 1 00:29:37.689 } 00:29:37.689 Got JSON-RPC error response 00:29:37.689 response: 00:29:37.689 { 00:29:37.689 "code": -5, 00:29:37.689 "message": "Input/output error" 00:29:37.689 } 00:29:37.689 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:37.689 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:29:37.689 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:37.689 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:37.689 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:37.689 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:29:37.689 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:29:37.689 09:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:37.689 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:37.689 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:29:37.689 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:37.689 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:29:37.689 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:37.689 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:37.689 rmmod nvme_tcp 00:29:37.949 rmmod nvme_fabrics 00:29:37.949 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:37.949 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:29:37.949 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:29:37.949 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 864922 ']' 00:29:37.949 09:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 864922 00:29:37.949 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 864922 ']' 00:29:37.949 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 864922 00:29:37.949 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:29:37.949 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:37.949 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 864922 00:29:37.949 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:37.949 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:37.949 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 864922' 00:29:37.949 killing process with pid 864922 00:29:37.949 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 864922 00:29:37.949 09:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 864922 00:29:37.949 09:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
00:29:37.949 09:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:37.949 09:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:37.949 09:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:37.949 09:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:37.949 09:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.949 09:38:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:37.949 09:38:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.492 09:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:40.492 09:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:40.492 09:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:40.492 09:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:40.492 09:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:40.492 09:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:29:40.492 09:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:40.492 09:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:40.492 09:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:40.492 09:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:40.492 09:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:40.492 09:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:40.492 09:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:44.700 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:44.700 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:44.700 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:44.700 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:44.700 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:44.700 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:44.700 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:44.700 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:44.700 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:44.700 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:44.700 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:44.700 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:44.700 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:44.700 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:44.700 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:44.700 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:44.700 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:44.700 09:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.rZc /tmp/spdk.key-null.Oau /tmp/spdk.key-sha256.1nZ /tmp/spdk.key-sha384.a6H /tmp/spdk.key-sha512.BWX 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:44.700 09:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:48.006 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:48.006 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:48.006 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:48.006 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:48.006 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:48.006 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:48.006 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:48.006 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:48.006 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:48.006 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:29:48.006 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:48.006 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:48.006 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:48.006 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:48.006 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:48.006 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:48.006 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:48.268 00:29:48.268 real 0m59.910s 00:29:48.268 user 0m53.038s 00:29:48.268 sys 0m16.052s 00:29:48.268 09:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:48.268 09:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.268 ************************************ 00:29:48.268 END TEST nvmf_auth_host 00:29:48.268 ************************************ 00:29:48.268 09:38:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:48.268 09:38:35 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:29:48.268 09:38:35 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:48.268 09:38:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:48.268 09:38:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:48.268 09:38:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:48.268 ************************************ 00:29:48.268 START TEST nvmf_digest 00:29:48.268 ************************************ 00:29:48.268 09:38:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:48.268 * Looking for test storage... 
00:29:48.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:48.268 09:38:35 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:48.529 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:48.529 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:48.529 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:48.529 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:48.529 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:48.529 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:48.529 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:48.529 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:48.529 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:48.529 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:48.529 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:48.529 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:48.529 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:48.529 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:48.529 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:48.530 09:38:35 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:29:48.530 09:38:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:56.675 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:56.675 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:29:56.675 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:56.675 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:56.675 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:56.675 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:56.675 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:56.675 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:29:56.675 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:56.675 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:29:56.675 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:29:56.675 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:29:56.675 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:29:56.675 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:29:56.675 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:56.676 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:56.676 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:56.676 Found net devices under 0000:31:00.0: cvl_0_0 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:56.676 Found net devices under 0000:31:00.1: cvl_0_1 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:56.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:56.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:29:56.676 00:29:56.676 --- 10.0.0.2 ping statistics --- 00:29:56.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.676 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:56.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:56.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:29:56.676 00:29:56.676 --- 10.0.0.1 ping statistics --- 00:29:56.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.676 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:56.676 ************************************ 00:29:56.676 START TEST nvmf_digest_clean 00:29:56.676 ************************************ 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=882545 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 882545 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 882545 ']' 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.676 
09:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:56.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:56.676 09:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:56.936 [2024-07-15 09:38:43.909774] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:29:56.936 [2024-07-15 09:38:43.909829] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:56.936 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.936 [2024-07-15 09:38:43.985211] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.936 [2024-07-15 09:38:44.048166] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:56.936 [2024-07-15 09:38:44.048203] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:56.936 [2024-07-15 09:38:44.048210] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:56.936 [2024-07-15 09:38:44.048217] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:56.936 [2024-07-15 09:38:44.048222] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
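The nvmf_tcp_init sequence traced above builds the loopback rig for this run: the target-side E810 port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, while the initiator-side port (cvl_0_1) stays in the default namespace as 10.0.0.1, and the nvmf target is then launched inside that namespace. A condensed shell sketch of the same steps, taken from the commands in the trace (workspace paths shortened):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc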
00:29:56.936 [2024-07-15 09:38:44.048242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.506 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:57.506 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:57.506 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:57.506 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:57.506 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:57.766 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:57.766 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:57.766 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:57.766 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:57.766 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.766 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:57.766 null0 00:29:57.766 [2024-07-15 09:38:44.786775] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:57.766 [2024-07-15 09:38:44.810956] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:57.766 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.766 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:57.766 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:57.766 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:57.766 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:57.766 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:57.766 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:57.766 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:57.766 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=882838 00:29:57.766 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 882838 /var/tmp/bperf.sock 00:29:57.766 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 882838 ']' 00:29:57.766 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:57.766 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:57.766 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:57.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
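common_target_config pushes the target's layout through a single rpc_cmd batch, so the log only shows the resulting notices (the null0 bdev, TCP transport init, and the listener on 10.0.0.2:4420). A hedged sketch of an equivalent manual configuration, assuming standard SPDK RPC names and the nqn.2016-06.io.spdk:cnode1 subsystem that the initiator attaches to later; the bdev size, block size, and serial below are illustrative, not taken from the script:

  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC framework_start_init                               # target was started with --wait-for-rpc
  $RPC bdev_null_create null0 100 4096                    # assumed 100 MB size / 4096 B block size
  $RPC nvmf_create_transport -t tcp -o                    # NVMF_TRANSPORT_OPTS from this run
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420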
00:29:57.766 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:57.766 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:57.766 09:38:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:57.766 [2024-07-15 09:38:44.865127] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:29:57.766 [2024-07-15 09:38:44.865172] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid882838 ] 00:29:57.766 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.766 [2024-07-15 09:38:44.946495] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.027 [2024-07-15 09:38:45.010373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.599 09:38:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:58.599 09:38:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:58.599 09:38:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:58.599 09:38:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:58.599 09:38:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:58.859 09:38:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:58.859 09:38:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:59.119 nvme0n1 00:29:59.119 09:38:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:59.119 09:38:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:59.119 Running I/O for 2 seconds... 
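The initiator side follows the same wait-for-rpc pattern: bdevperf starts paused on its own RPC socket, the framework is initialized, a controller is attached with data digest enabled (--ddgst), and the I/O phase is driven from bdevperf.py. Condensed from the commands traced above (workspace paths shortened):

  BPERF=/var/tmp/bperf.sock
  ./build/examples/bdevperf -m 2 -r $BPERF -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  scripts/rpc.py -s $BPERF framework_start_init
  scripts/rpc.py -s $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  examples/bdev/bdevperf/bdevperf.py -s $BPERF perform_tests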
00:30:01.091 00:30:01.091 Latency(us) 00:30:01.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:01.091 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:01.091 nvme0n1 : 2.00 19768.00 77.22 0.00 0.00 6466.83 3072.00 13271.04 00:30:01.091 =================================================================================================================== 00:30:01.091 Total : 19768.00 77.22 0.00 0.00 6466.83 3072.00 13271.04 00:30:01.091 0 00:30:01.351 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:01.351 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:01.351 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:01.351 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:01.351 | select(.opcode=="crc32c") 00:30:01.351 | "\(.module_name) \(.executed)"' 00:30:01.351 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:01.351 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:01.351 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:01.351 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:01.351 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:01.351 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 882838 00:30:01.351 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 882838 ']' 00:30:01.351 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 882838 00:30:01.351 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:30:01.351 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:01.351 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 882838 00:30:01.351 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:01.351 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:01.351 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 882838' 00:30:01.351 killing process with pid 882838 00:30:01.351 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 882838 00:30:01.351 Received shutdown signal, test time was about 2.000000 seconds 00:30:01.351 00:30:01.351 Latency(us) 00:30:01.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:01.351 =================================================================================================================== 00:30:01.351 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:01.351 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 882838 00:30:01.610 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:30:01.611 09:38:48 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:01.611 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:01.611 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:01.611 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:01.611 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:01.611 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:01.611 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=883575 00:30:01.611 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 883575 /var/tmp/bperf.sock 00:30:01.611 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 883575 ']' 00:30:01.611 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:01.611 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:01.611 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:01.611 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:01.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:01.611 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:01.611 09:38:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:01.611 [2024-07-15 09:38:48.691920] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:30:01.611 [2024-07-15 09:38:48.691975] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid883575 ] 00:30:01.611 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:01.611 Zero copy mechanism will not be used. 
00:30:01.611 EAL: No free 2048 kB hugepages reported on node 1 00:30:01.611 [2024-07-15 09:38:48.773157] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.870 [2024-07-15 09:38:48.836203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:02.439 09:38:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:02.439 09:38:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:30:02.440 09:38:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:02.440 09:38:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:02.440 09:38:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:02.701 09:38:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:02.701 09:38:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:02.701 nvme0n1 00:30:02.962 09:38:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:02.962 09:38:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:02.962 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:02.962 Zero copy mechanism will not be used. 00:30:02.962 Running I/O for 2 seconds... 
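After each bperf run the script reads the accel framework's crc32c statistics back over the same socket and verifies that digest work was actually executed by the expected module ('software' here, since scan_dsa=false). A condensed sketch of that check, using the exact jq filter from the trace:

  read -r acc_module acc_executed < <(
      scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  (( acc_executed > 0 ))            # some crc32c digests must have been computed
  [[ $acc_module == software ]]     # and by the expected module for this variant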
00:30:04.875 00:30:04.875 Latency(us) 00:30:04.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.875 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:04.875 nvme0n1 : 2.00 3282.61 410.33 0.00 0.00 4871.86 856.75 9393.49 00:30:04.875 =================================================================================================================== 00:30:04.875 Total : 3282.61 410.33 0.00 0.00 4871.86 856.75 9393.49 00:30:04.875 0 00:30:04.875 09:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:04.875 09:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:04.875 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:04.875 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:04.875 | select(.opcode=="crc32c") 00:30:04.875 | "\(.module_name) \(.executed)"' 00:30:04.875 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:05.135 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 883575 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 883575 ']' 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 883575 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 883575 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 883575' 00:30:05.136 killing process with pid 883575 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 883575 00:30:05.136 Received shutdown signal, test time was about 2.000000 seconds 00:30:05.136 00:30:05.136 Latency(us) 00:30:05.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:05.136 =================================================================================================================== 00:30:05.136 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 883575 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:30:05.136 09:38:52 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=884261 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 884261 /var/tmp/bperf.sock 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 884261 ']' 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:05.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:05.136 09:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:05.397 [2024-07-15 09:38:52.364510] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:30:05.397 [2024-07-15 09:38:52.364563] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid884261 ] 00:30:05.397 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.397 [2024-07-15 09:38:52.444713] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.397 [2024-07-15 09:38:52.496263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.968 09:38:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:05.968 09:38:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:30:05.968 09:38:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:05.968 09:38:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:05.968 09:38:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:06.228 09:38:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:06.228 09:38:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:06.487 nvme0n1 00:30:06.487 09:38:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:06.487 09:38:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:06.748 Running I/O for 2 seconds... 
00:30:08.662 00:30:08.662 Latency(us) 00:30:08.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.662 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:08.662 nvme0n1 : 2.01 21296.35 83.19 0.00 0.00 5998.50 5789.01 15400.96 00:30:08.662 =================================================================================================================== 00:30:08.662 Total : 21296.35 83.19 0.00 0.00 5998.50 5789.01 15400.96 00:30:08.662 0 00:30:08.662 09:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:08.662 09:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:08.662 09:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:08.662 09:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:08.662 | select(.opcode=="crc32c") 00:30:08.662 | "\(.module_name) \(.executed)"' 00:30:08.662 09:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:08.924 09:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:08.924 09:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:08.924 09:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:08.924 09:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:08.924 09:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 884261 00:30:08.924 09:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 884261 ']' 00:30:08.924 09:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 884261 00:30:08.924 09:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:30:08.924 09:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:08.924 09:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 884261 00:30:08.924 09:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:08.924 09:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:08.924 09:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 884261' 00:30:08.924 killing process with pid 884261 00:30:08.924 09:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 884261 00:30:08.924 Received shutdown signal, test time was about 2.000000 seconds 00:30:08.924 00:30:08.924 Latency(us) 00:30:08.924 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.924 =================================================================================================================== 00:30:08.924 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:08.924 09:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 884261 00:30:08.924 09:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:30:08.924 09:38:56 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:08.924 09:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:08.924 09:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:08.924 09:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:08.924 09:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:08.924 09:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:08.924 09:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=884947 00:30:08.924 09:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 884947 /var/tmp/bperf.sock 00:30:08.924 09:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 884947 ']' 00:30:08.924 09:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:08.924 09:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:08.924 09:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:08.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:08.924 09:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:08.924 09:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:08.924 09:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:08.924 [2024-07-15 09:38:56.123799] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:30:08.925 [2024-07-15 09:38:56.123852] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid884947 ] 00:30:08.925 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:08.925 Zero copy mechanism will not be used. 
00:30:09.186 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.186 [2024-07-15 09:38:56.203342] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.186 [2024-07-15 09:38:56.256683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.758 09:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:09.758 09:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:30:09.758 09:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:09.758 09:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:09.758 09:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:10.019 09:38:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:10.019 09:38:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:10.280 nvme0n1 00:30:10.280 09:38:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:10.280 09:38:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:10.280 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:10.280 Zero copy mechanism will not be used. 00:30:10.280 Running I/O for 2 seconds... 
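At this point all four clean-digest workloads have been launched; they differ only in the triplet passed to run_bperf (rw mode, I/O size, queue depth), with DSA offload disabled throughout. The 128 KiB runs also exceed the 65536-byte sock zero-copy threshold, which is why the 'Zero copy mechanism will not be used' notices appear only for them:

  run_bperf randread  4096   128 false    # 4 KiB reads,    QD 128
  run_bperf randread  131072 16  false    # 128 KiB reads,  QD 16, zero copy disabled
  run_bperf randwrite 4096   128 false    # 4 KiB writes,   QD 128
  run_bperf randwrite 131072 16  false    # 128 KiB writes, QD 16, zero copy disabled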
00:30:12.824 00:30:12.824 Latency(us) 00:30:12.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:12.824 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:12.824 nvme0n1 : 2.00 3357.64 419.70 0.00 0.00 4758.11 2225.49 12397.23 00:30:12.824 =================================================================================================================== 00:30:12.824 Total : 3357.64 419.70 0.00 0.00 4758.11 2225.49 12397.23 00:30:12.824 0 00:30:12.824 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:12.824 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:12.824 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:12.824 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:12.824 | select(.opcode=="crc32c") 00:30:12.824 | "\(.module_name) \(.executed)"' 00:30:12.824 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:12.824 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:12.824 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:12.824 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:12.824 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:12.824 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 884947 00:30:12.824 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 884947 ']' 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 884947 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 884947 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 884947' 00:30:12.825 killing process with pid 884947 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 884947 00:30:12.825 Received shutdown signal, test time was about 2.000000 seconds 00:30:12.825 00:30:12.825 Latency(us) 00:30:12.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:12.825 =================================================================================================================== 00:30:12.825 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 884947 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 882545 00:30:12.825 09:38:59 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 882545 ']' 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 882545 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 882545 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 882545' 00:30:12.825 killing process with pid 882545 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 882545 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 882545 00:30:12.825 00:30:12.825 real 0m16.098s 00:30:12.825 user 0m31.610s 00:30:12.825 sys 0m3.272s 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:12.825 ************************************ 00:30:12.825 END TEST nvmf_digest_clean 00:30:12.825 ************************************ 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:12.825 09:38:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:12.825 ************************************ 00:30:12.825 START TEST nvmf_digest_error 00:30:12.825 ************************************ 00:30:12.825 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:30:12.825 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:30:12.825 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:12.825 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:12.825 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:12.825 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=885660 00:30:13.086 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 885660 00:30:13.086 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:13.086 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 885660 ']' 00:30:13.086 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 
00:30:13.086 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:13.086 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:13.087 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:13.087 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:13.087 [2024-07-15 09:39:00.074489] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:30:13.087 [2024-07-15 09:39:00.074540] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:13.087 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.087 [2024-07-15 09:39:00.149013] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.087 [2024-07-15 09:39:00.213180] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:13.087 [2024-07-15 09:39:00.213217] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:13.087 [2024-07-15 09:39:00.213224] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:13.087 [2024-07-15 09:39:00.213233] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:13.087 [2024-07-15 09:39:00.213239] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
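Unlike the clean variant, the nvmf_digest_error test that unfolds below makes the digest path fail on purpose: the target routes the crc32c opcode through the accel 'error' module, the initiator is told to count NVMe errors and retry indefinitely, and corruption is switched on after the controller attaches, which produces the 'data digest error' and 'COMMAND TRANSIENT TRANSPORT ERROR' records further down. A condensed sketch of the RPC sequence traced in this test (the '-i 256' injection argument is carried over from the trace as-is):

  scripts/rpc.py accel_assign_opc -o crc32c -m error                            # target side
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable                  # injection off at first
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256           # now corrupt crc32c
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests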
00:30:13.087 [2024-07-15 09:39:00.213258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.657 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:13.657 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:30:13.657 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:13.657 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:13.657 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:13.935 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:13.935 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:13.935 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.935 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:13.935 [2024-07-15 09:39:00.895202] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:13.935 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.935 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:30:13.935 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:30:13.935 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.935 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:13.935 null0 00:30:13.935 [2024-07-15 09:39:00.972049] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.935 [2024-07-15 09:39:00.996228] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:13.935 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.935 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:30:13.935 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:13.935 09:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:13.935 09:39:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:13.935 09:39:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:13.935 09:39:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=886049 00:30:13.935 09:39:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 886049 /var/tmp/bperf.sock 00:30:13.935 09:39:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 886049 ']' 00:30:13.935 09:39:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:13.935 09:39:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:13.935 09:39:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:30:13.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:13.935 09:39:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:13.935 09:39:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:13.935 09:39:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:30:13.935 [2024-07-15 09:39:01.049726] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:30:13.935 [2024-07-15 09:39:01.049779] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid886049 ] 00:30:13.935 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.935 [2024-07-15 09:39:01.130152] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.195 [2024-07-15 09:39:01.183926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.765 09:39:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:14.765 09:39:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:30:14.765 09:39:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:14.765 09:39:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:14.765 09:39:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:14.765 09:39:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.765 09:39:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:14.765 09:39:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.765 09:39:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:14.765 09:39:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:15.026 nvme0n1 00:30:15.026 09:39:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:15.026 09:39:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.026 09:39:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:15.026 09:39:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.026 09:39:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:15.026 09:39:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:15.286 Running I/O for 2 seconds... 00:30:15.286 [2024-07-15 09:39:02.294189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.286 [2024-07-15 09:39:02.294221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.286 [2024-07-15 09:39:02.294230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.286 [2024-07-15 09:39:02.306838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.287 [2024-07-15 09:39:02.306857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.287 [2024-07-15 09:39:02.306865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.287 [2024-07-15 09:39:02.317329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.287 [2024-07-15 09:39:02.317348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.287 [2024-07-15 09:39:02.317355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.287 [2024-07-15 09:39:02.331012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.287 [2024-07-15 09:39:02.331035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.287 [2024-07-15 09:39:02.331042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.287 [2024-07-15 09:39:02.343721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.287 [2024-07-15 09:39:02.343741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.287 [2024-07-15 09:39:02.343747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.287 [2024-07-15 09:39:02.355525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.287 [2024-07-15 09:39:02.355542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.287 [2024-07-15 09:39:02.355549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.287 [2024-07-15 09:39:02.368112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.287 [2024-07-15 09:39:02.368129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7081 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.287 [2024-07-15 09:39:02.368136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.287 [2024-07-15 09:39:02.381335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.287 [2024-07-15 09:39:02.381353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.287 [2024-07-15 09:39:02.381359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.287 [2024-07-15 09:39:02.394539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.287 [2024-07-15 09:39:02.394556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.287 [2024-07-15 09:39:02.394562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.287 [2024-07-15 09:39:02.404680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.287 [2024-07-15 09:39:02.404697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.287 [2024-07-15 09:39:02.404703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.287 [2024-07-15 09:39:02.418078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.287 [2024-07-15 09:39:02.418095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.287 [2024-07-15 09:39:02.418101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.287 [2024-07-15 09:39:02.431369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.287 [2024-07-15 09:39:02.431387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.287 [2024-07-15 09:39:02.431397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.287 [2024-07-15 09:39:02.443370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.287 [2024-07-15 09:39:02.443386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.287 [2024-07-15 09:39:02.443393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.287 [2024-07-15 09:39:02.454328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.287 [2024-07-15 09:39:02.454346] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:108 nsid:1 lba:244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.287 [2024-07-15 09:39:02.454352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.287 [2024-07-15 09:39:02.468400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.287 [2024-07-15 09:39:02.468417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.287 [2024-07-15 09:39:02.468423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.287 [2024-07-15 09:39:02.481086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.287 [2024-07-15 09:39:02.481103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.287 [2024-07-15 09:39:02.481109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.548 [2024-07-15 09:39:02.492999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.548 [2024-07-15 09:39:02.493016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.548 [2024-07-15 09:39:02.493022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.548 [2024-07-15 09:39:02.503600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.548 [2024-07-15 09:39:02.503617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.548 [2024-07-15 09:39:02.503624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.548 [2024-07-15 09:39:02.516688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.548 [2024-07-15 09:39:02.516705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.548 [2024-07-15 09:39:02.516711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.548 [2024-07-15 09:39:02.529206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.548 [2024-07-15 09:39:02.529223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.548 [2024-07-15 09:39:02.529230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.548 [2024-07-15 09:39:02.542121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.548 [2024-07-15 
09:39:02.542141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.548 [2024-07-15 09:39:02.542147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.548 [2024-07-15 09:39:02.554889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.548 [2024-07-15 09:39:02.554906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.548 [2024-07-15 09:39:02.554912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.548 [2024-07-15 09:39:02.567620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.549 [2024-07-15 09:39:02.567637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.549 [2024-07-15 09:39:02.567643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.549 [2024-07-15 09:39:02.578425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.549 [2024-07-15 09:39:02.578442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.549 [2024-07-15 09:39:02.578448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.549 [2024-07-15 09:39:02.591003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.549 [2024-07-15 09:39:02.591020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.549 [2024-07-15 09:39:02.591026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.549 [2024-07-15 09:39:02.604191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.549 [2024-07-15 09:39:02.604208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.549 [2024-07-15 09:39:02.604214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.549 [2024-07-15 09:39:02.616776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.549 [2024-07-15 09:39:02.616793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.549 [2024-07-15 09:39:02.616799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.549 [2024-07-15 09:39:02.628325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x160ac70) 00:30:15.549 [2024-07-15 09:39:02.628341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.549 [2024-07-15 09:39:02.628348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.549 [2024-07-15 09:39:02.641721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.549 [2024-07-15 09:39:02.641738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.549 [2024-07-15 09:39:02.641745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.549 [2024-07-15 09:39:02.653349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.549 [2024-07-15 09:39:02.653366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.549 [2024-07-15 09:39:02.653372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.549 [2024-07-15 09:39:02.665743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.549 [2024-07-15 09:39:02.665765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.549 [2024-07-15 09:39:02.665772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.549 [2024-07-15 09:39:02.678456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.549 [2024-07-15 09:39:02.678473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.549 [2024-07-15 09:39:02.678479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.549 [2024-07-15 09:39:02.688842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.549 [2024-07-15 09:39:02.688859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.549 [2024-07-15 09:39:02.688865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.549 [2024-07-15 09:39:02.702255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.549 [2024-07-15 09:39:02.702272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.549 [2024-07-15 09:39:02.702278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.549 [2024-07-15 09:39:02.714678] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.549 [2024-07-15 09:39:02.714696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.549 [2024-07-15 09:39:02.714702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.549 [2024-07-15 09:39:02.726757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.549 [2024-07-15 09:39:02.726774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.549 [2024-07-15 09:39:02.726780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.549 [2024-07-15 09:39:02.739909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.549 [2024-07-15 09:39:02.739926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.549 [2024-07-15 09:39:02.739932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.810 [2024-07-15 09:39:02.751728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.810 [2024-07-15 09:39:02.751744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.810 [2024-07-15 09:39:02.751756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.810 [2024-07-15 09:39:02.763055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.810 [2024-07-15 09:39:02.763071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.810 [2024-07-15 09:39:02.763077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.810 [2024-07-15 09:39:02.774836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.810 [2024-07-15 09:39:02.774853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.810 [2024-07-15 09:39:02.774859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.810 [2024-07-15 09:39:02.787378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.810 [2024-07-15 09:39:02.787394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.810 [2024-07-15 09:39:02.787400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:30:15.810 [2024-07-15 09:39:02.799536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.810 [2024-07-15 09:39:02.799553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.810 [2024-07-15 09:39:02.799559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.810 [2024-07-15 09:39:02.811579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.810 [2024-07-15 09:39:02.811596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.810 [2024-07-15 09:39:02.811602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.810 [2024-07-15 09:39:02.824094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.810 [2024-07-15 09:39:02.824111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.810 [2024-07-15 09:39:02.824117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.810 [2024-07-15 09:39:02.836864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.810 [2024-07-15 09:39:02.836881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.810 [2024-07-15 09:39:02.836887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.810 [2024-07-15 09:39:02.849745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.810 [2024-07-15 09:39:02.849765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.810 [2024-07-15 09:39:02.849771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.810 [2024-07-15 09:39:02.861880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.810 [2024-07-15 09:39:02.861901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.810 [2024-07-15 09:39:02.861907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.810 [2024-07-15 09:39:02.873979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.810 [2024-07-15 09:39:02.873995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.810 [2024-07-15 09:39:02.874001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.810 [2024-07-15 09:39:02.886174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.810 [2024-07-15 09:39:02.886191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.810 [2024-07-15 09:39:02.886197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.810 [2024-07-15 09:39:02.897524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.810 [2024-07-15 09:39:02.897541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.810 [2024-07-15 09:39:02.897547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.810 [2024-07-15 09:39:02.909664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.810 [2024-07-15 09:39:02.909681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.810 [2024-07-15 09:39:02.909687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.810 [2024-07-15 09:39:02.923431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.810 [2024-07-15 09:39:02.923447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.810 [2024-07-15 09:39:02.923453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.810 [2024-07-15 09:39:02.935004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.810 [2024-07-15 09:39:02.935021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.810 [2024-07-15 09:39:02.935027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.810 [2024-07-15 09:39:02.947406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.810 [2024-07-15 09:39:02.947423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.810 [2024-07-15 09:39:02.947429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.810 [2024-07-15 09:39:02.959056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.810 [2024-07-15 09:39:02.959072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.810 [2024-07-15 09:39:02.959079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.810 [2024-07-15 09:39:02.971014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.810 [2024-07-15 09:39:02.971031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.810 [2024-07-15 09:39:02.971037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.810 [2024-07-15 09:39:02.983862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.810 [2024-07-15 09:39:02.983878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.810 [2024-07-15 09:39:02.983884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.810 [2024-07-15 09:39:02.994879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.810 [2024-07-15 09:39:02.994895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.810 [2024-07-15 09:39:02.994901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.810 [2024-07-15 09:39:03.007988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:15.810 [2024-07-15 09:39:03.008004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.810 [2024-07-15 09:39:03.008010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.071 [2024-07-15 09:39:03.020946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.071 [2024-07-15 09:39:03.020963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.071 [2024-07-15 09:39:03.020969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.071 [2024-07-15 09:39:03.030743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.071 [2024-07-15 09:39:03.030764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.071 [2024-07-15 09:39:03.030770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.072 [2024-07-15 09:39:03.044130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.072 [2024-07-15 09:39:03.044147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:16.072 [2024-07-15 09:39:03.044153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.072 [2024-07-15 09:39:03.057998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.072 [2024-07-15 09:39:03.058015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.072 [2024-07-15 09:39:03.058021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.072 [2024-07-15 09:39:03.070069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.072 [2024-07-15 09:39:03.070088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.072 [2024-07-15 09:39:03.070095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.072 [2024-07-15 09:39:03.082531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.072 [2024-07-15 09:39:03.082547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.072 [2024-07-15 09:39:03.082553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.072 [2024-07-15 09:39:03.093388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.072 [2024-07-15 09:39:03.093404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.072 [2024-07-15 09:39:03.093411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.072 [2024-07-15 09:39:03.105744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.072 [2024-07-15 09:39:03.105764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.072 [2024-07-15 09:39:03.105770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.072 [2024-07-15 09:39:03.118048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.072 [2024-07-15 09:39:03.118066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.072 [2024-07-15 09:39:03.118072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.072 [2024-07-15 09:39:03.131640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.072 [2024-07-15 09:39:03.131656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13741 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.072 [2024-07-15 09:39:03.131663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.072 [2024-07-15 09:39:03.143088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.072 [2024-07-15 09:39:03.143105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.072 [2024-07-15 09:39:03.143111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.072 [2024-07-15 09:39:03.155703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.072 [2024-07-15 09:39:03.155719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.072 [2024-07-15 09:39:03.155725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.072 [2024-07-15 09:39:03.167462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.072 [2024-07-15 09:39:03.167479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.072 [2024-07-15 09:39:03.167484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.072 [2024-07-15 09:39:03.180808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.072 [2024-07-15 09:39:03.180825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.072 [2024-07-15 09:39:03.180831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.072 [2024-07-15 09:39:03.192875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.072 [2024-07-15 09:39:03.192891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.072 [2024-07-15 09:39:03.192897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.072 [2024-07-15 09:39:03.205088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.072 [2024-07-15 09:39:03.205104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.072 [2024-07-15 09:39:03.205110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.072 [2024-07-15 09:39:03.216521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.072 [2024-07-15 09:39:03.216539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.072 [2024-07-15 09:39:03.216546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.072 [2024-07-15 09:39:03.228608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.072 [2024-07-15 09:39:03.228624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.072 [2024-07-15 09:39:03.228631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.072 [2024-07-15 09:39:03.242477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.072 [2024-07-15 09:39:03.242493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.072 [2024-07-15 09:39:03.242499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.072 [2024-07-15 09:39:03.254042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.072 [2024-07-15 09:39:03.254058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.072 [2024-07-15 09:39:03.254064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.072 [2024-07-15 09:39:03.265388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.072 [2024-07-15 09:39:03.265404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.072 [2024-07-15 09:39:03.265410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.333 [2024-07-15 09:39:03.278006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.333 [2024-07-15 09:39:03.278022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.333 [2024-07-15 09:39:03.278032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.333 [2024-07-15 09:39:03.291231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.333 [2024-07-15 09:39:03.291248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.333 [2024-07-15 09:39:03.291254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.333 [2024-07-15 09:39:03.302773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 
00:30:16.333 [2024-07-15 09:39:03.302789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.333 [2024-07-15 09:39:03.302795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.333 [2024-07-15 09:39:03.314385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.333 [2024-07-15 09:39:03.314402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.333 [2024-07-15 09:39:03.314408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.333 [2024-07-15 09:39:03.327623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.333 [2024-07-15 09:39:03.327640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.333 [2024-07-15 09:39:03.327646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.333 [2024-07-15 09:39:03.339450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.333 [2024-07-15 09:39:03.339467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.333 [2024-07-15 09:39:03.339473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.333 [2024-07-15 09:39:03.351927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.333 [2024-07-15 09:39:03.351944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.333 [2024-07-15 09:39:03.351950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.333 [2024-07-15 09:39:03.364250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.333 [2024-07-15 09:39:03.364266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.333 [2024-07-15 09:39:03.364273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.333 [2024-07-15 09:39:03.377487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.333 [2024-07-15 09:39:03.377503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.333 [2024-07-15 09:39:03.377509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.333 [2024-07-15 09:39:03.389109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.333 [2024-07-15 09:39:03.389128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.333 [2024-07-15 09:39:03.389134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.333 [2024-07-15 09:39:03.400373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.333 [2024-07-15 09:39:03.400390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.333 [2024-07-15 09:39:03.400396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.333 [2024-07-15 09:39:03.412929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.333 [2024-07-15 09:39:03.412945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.333 [2024-07-15 09:39:03.412951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.333 [2024-07-15 09:39:03.424829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.333 [2024-07-15 09:39:03.424846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.333 [2024-07-15 09:39:03.424852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.333 [2024-07-15 09:39:03.437385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.333 [2024-07-15 09:39:03.437401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.333 [2024-07-15 09:39:03.437407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.333 [2024-07-15 09:39:03.450960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.333 [2024-07-15 09:39:03.450977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.333 [2024-07-15 09:39:03.450983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.333 [2024-07-15 09:39:03.463423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.333 [2024-07-15 09:39:03.463439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.333 [2024-07-15 09:39:03.463445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.333 [2024-07-15 09:39:03.475042] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.333 [2024-07-15 09:39:03.475058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.333 [2024-07-15 09:39:03.475064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.333 [2024-07-15 09:39:03.486595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.333 [2024-07-15 09:39:03.486612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.333 [2024-07-15 09:39:03.486618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.333 [2024-07-15 09:39:03.498662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.333 [2024-07-15 09:39:03.498679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.333 [2024-07-15 09:39:03.498686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.333 [2024-07-15 09:39:03.511338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.333 [2024-07-15 09:39:03.511355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.333 [2024-07-15 09:39:03.511361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.333 [2024-07-15 09:39:03.524711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.333 [2024-07-15 09:39:03.524728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.333 [2024-07-15 09:39:03.524736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.593 [2024-07-15 09:39:03.536827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.593 [2024-07-15 09:39:03.536844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.593 [2024-07-15 09:39:03.536850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.593 [2024-07-15 09:39:03.546814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.593 [2024-07-15 09:39:03.546830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.593 [2024-07-15 09:39:03.546836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:30:16.593 [2024-07-15 09:39:03.560998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.593 [2024-07-15 09:39:03.561014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.593 [2024-07-15 09:39:03.561020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.593 [2024-07-15 09:39:03.573846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.593 [2024-07-15 09:39:03.573862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.593 [2024-07-15 09:39:03.573868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.593 [2024-07-15 09:39:03.583919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.593 [2024-07-15 09:39:03.583935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.593 [2024-07-15 09:39:03.583941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.593 [2024-07-15 09:39:03.597432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.593 [2024-07-15 09:39:03.597449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.593 [2024-07-15 09:39:03.597459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.593 [2024-07-15 09:39:03.609097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.593 [2024-07-15 09:39:03.609114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.593 [2024-07-15 09:39:03.609120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.593 [2024-07-15 09:39:03.623523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.593 [2024-07-15 09:39:03.623540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.593 [2024-07-15 09:39:03.623546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.593 [2024-07-15 09:39:03.634698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.593 [2024-07-15 09:39:03.634714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.593 [2024-07-15 09:39:03.634721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.593 [2024-07-15 09:39:03.647883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.593 [2024-07-15 09:39:03.647899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.593 [2024-07-15 09:39:03.647905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.593 [2024-07-15 09:39:03.659263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.593 [2024-07-15 09:39:03.659280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.593 [2024-07-15 09:39:03.659286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.593 [2024-07-15 09:39:03.671331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.593 [2024-07-15 09:39:03.671347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.593 [2024-07-15 09:39:03.671353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.593 [2024-07-15 09:39:03.684138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.593 [2024-07-15 09:39:03.684155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.593 [2024-07-15 09:39:03.684160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.593 [2024-07-15 09:39:03.696244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.593 [2024-07-15 09:39:03.696260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.593 [2024-07-15 09:39:03.696266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.593 [2024-07-15 09:39:03.708211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.593 [2024-07-15 09:39:03.708228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.593 [2024-07-15 09:39:03.708234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.594 [2024-07-15 09:39:03.721273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.594 [2024-07-15 09:39:03.721290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.594 [2024-07-15 09:39:03.721296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.594 [2024-07-15 09:39:03.733277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.594 [2024-07-15 09:39:03.733294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.594 [2024-07-15 09:39:03.733300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.594 [2024-07-15 09:39:03.745139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.594 [2024-07-15 09:39:03.745155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.594 [2024-07-15 09:39:03.745162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.594 [2024-07-15 09:39:03.758317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.594 [2024-07-15 09:39:03.758334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.594 [2024-07-15 09:39:03.758340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.594 [2024-07-15 09:39:03.769134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.594 [2024-07-15 09:39:03.769151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.594 [2024-07-15 09:39:03.769157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.594 [2024-07-15 09:39:03.781973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.594 [2024-07-15 09:39:03.781990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.594 [2024-07-15 09:39:03.781996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.854 [2024-07-15 09:39:03.794047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.854 [2024-07-15 09:39:03.794064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.854 [2024-07-15 09:39:03.794070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.854 [2024-07-15 09:39:03.806124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.854 [2024-07-15 09:39:03.806140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:16.854 [2024-07-15 09:39:03.806149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.854 [2024-07-15 09:39:03.819547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.854 [2024-07-15 09:39:03.819564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.854 [2024-07-15 09:39:03.819570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.854 [2024-07-15 09:39:03.831663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.854 [2024-07-15 09:39:03.831680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.854 [2024-07-15 09:39:03.831686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.854 [2024-07-15 09:39:03.843477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.854 [2024-07-15 09:39:03.843494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.854 [2024-07-15 09:39:03.843500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.854 [2024-07-15 09:39:03.855241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.854 [2024-07-15 09:39:03.855258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.855 [2024-07-15 09:39:03.855264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.855 [2024-07-15 09:39:03.867544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.855 [2024-07-15 09:39:03.867561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.855 [2024-07-15 09:39:03.867567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.855 [2024-07-15 09:39:03.880512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.855 [2024-07-15 09:39:03.880529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.855 [2024-07-15 09:39:03.880535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.855 [2024-07-15 09:39:03.892391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.855 [2024-07-15 09:39:03.892407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 
lba:5190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.855 [2024-07-15 09:39:03.892413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.855 [2024-07-15 09:39:03.904795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.855 [2024-07-15 09:39:03.904811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.855 [2024-07-15 09:39:03.904817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.855 [2024-07-15 09:39:03.917536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.855 [2024-07-15 09:39:03.917556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.855 [2024-07-15 09:39:03.917562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.855 [2024-07-15 09:39:03.927399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.855 [2024-07-15 09:39:03.927416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.855 [2024-07-15 09:39:03.927422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.855 [2024-07-15 09:39:03.941688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.855 [2024-07-15 09:39:03.941705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.855 [2024-07-15 09:39:03.941711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.855 [2024-07-15 09:39:03.953685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.855 [2024-07-15 09:39:03.953702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.855 [2024-07-15 09:39:03.953708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.855 [2024-07-15 09:39:03.967520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.855 [2024-07-15 09:39:03.967536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.855 [2024-07-15 09:39:03.967543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.855 [2024-07-15 09:39:03.979292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.855 [2024-07-15 09:39:03.979309] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.855 [2024-07-15 09:39:03.979315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.855 [2024-07-15 09:39:03.990891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.855 [2024-07-15 09:39:03.990908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.855 [2024-07-15 09:39:03.990914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.855 [2024-07-15 09:39:04.002738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.855 [2024-07-15 09:39:04.002760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.855 [2024-07-15 09:39:04.002766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.855 [2024-07-15 09:39:04.016280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.855 [2024-07-15 09:39:04.016297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.855 [2024-07-15 09:39:04.016304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.855 [2024-07-15 09:39:04.028639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.855 [2024-07-15 09:39:04.028656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.855 [2024-07-15 09:39:04.028662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.855 [2024-07-15 09:39:04.039440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.855 [2024-07-15 09:39:04.039457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.855 [2024-07-15 09:39:04.039464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.855 [2024-07-15 09:39:04.052150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:16.855 [2024-07-15 09:39:04.052168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.855 [2024-07-15 09:39:04.052175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.115 [2024-07-15 09:39:04.064596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 
00:30:17.115 [2024-07-15 09:39:04.064613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.115 [2024-07-15 09:39:04.064619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.115 [2024-07-15 09:39:04.076980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:17.115 [2024-07-15 09:39:04.076997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.115 [2024-07-15 09:39:04.077003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.115 [2024-07-15 09:39:04.089988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:17.115 [2024-07-15 09:39:04.090005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.115 [2024-07-15 09:39:04.090011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.115 [2024-07-15 09:39:04.101131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:17.115 [2024-07-15 09:39:04.101148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.115 [2024-07-15 09:39:04.101154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.115 [2024-07-15 09:39:04.113137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:17.115 [2024-07-15 09:39:04.113155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.115 [2024-07-15 09:39:04.113161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.115 [2024-07-15 09:39:04.127387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:17.115 [2024-07-15 09:39:04.127404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.115 [2024-07-15 09:39:04.127414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.115 [2024-07-15 09:39:04.138151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:17.115 [2024-07-15 09:39:04.138168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.115 [2024-07-15 09:39:04.138174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.115 [2024-07-15 09:39:04.151136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x160ac70) 00:30:17.115 [2024-07-15 09:39:04.151153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.115 [2024-07-15 09:39:04.151159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.115 [2024-07-15 09:39:04.164392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:17.115 [2024-07-15 09:39:04.164409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.115 [2024-07-15 09:39:04.164415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.115 [2024-07-15 09:39:04.175554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:17.115 [2024-07-15 09:39:04.175571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.115 [2024-07-15 09:39:04.175577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.115 [2024-07-15 09:39:04.188343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:17.115 [2024-07-15 09:39:04.188359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.115 [2024-07-15 09:39:04.188365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.115 [2024-07-15 09:39:04.199914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:17.115 [2024-07-15 09:39:04.199931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.115 [2024-07-15 09:39:04.199937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.115 [2024-07-15 09:39:04.213286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:17.115 [2024-07-15 09:39:04.213303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.116 [2024-07-15 09:39:04.213309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.116 [2024-07-15 09:39:04.225699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:17.116 [2024-07-15 09:39:04.225715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.116 [2024-07-15 09:39:04.225721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.116 [2024-07-15 09:39:04.237697] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:17.116 [2024-07-15 09:39:04.237714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.116 [2024-07-15 09:39:04.237720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.116 [2024-07-15 09:39:04.248416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:17.116 [2024-07-15 09:39:04.248433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.116 [2024-07-15 09:39:04.248439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.116 [2024-07-15 09:39:04.261268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:17.116 [2024-07-15 09:39:04.261285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.116 [2024-07-15 09:39:04.261291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.116 [2024-07-15 09:39:04.274908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160ac70) 00:30:17.116 [2024-07-15 09:39:04.274925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.116 [2024-07-15 09:39:04.274931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.116 00:30:17.116 Latency(us) 00:30:17.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.116 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:17.116 nvme0n1 : 2.00 20726.56 80.96 0.00 0.00 6167.87 3153.92 16820.91 00:30:17.116 =================================================================================================================== 00:30:17.116 Total : 20726.56 80.96 0.00 0.00 6167.87 3153.92 16820.91 00:30:17.116 0 00:30:17.116 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:17.116 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:17.116 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:17.116 | .driver_specific 00:30:17.116 | .nvme_error 00:30:17.116 | .status_code 00:30:17.116 | .command_transient_transport_error' 00:30:17.116 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:17.375 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 )) 00:30:17.375 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 886049 00:30:17.375 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 886049 ']' 00:30:17.375 09:39:04 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 886049 00:30:17.375 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:30:17.375 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:17.375 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 886049 00:30:17.375 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:17.375 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:17.375 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 886049' 00:30:17.375 killing process with pid 886049 00:30:17.375 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 886049 00:30:17.375 Received shutdown signal, test time was about 2.000000 seconds 00:30:17.375 00:30:17.375 Latency(us) 00:30:17.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.375 =================================================================================================================== 00:30:17.375 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:17.375 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 886049 00:30:17.635 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:30:17.635 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:17.635 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:17.635 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:30:17.635 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:30:17.635 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=886792 00:30:17.636 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 886792 /var/tmp/bperf.sock 00:30:17.636 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 886792 ']' 00:30:17.636 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:17.636 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:17.636 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:17.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:17.636 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:17.636 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:17.636 09:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:30:17.636 [2024-07-15 09:39:04.672872] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:30:17.636 [2024-07-15 09:39:04.672927] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid886792 ] 00:30:17.636 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:17.636 Zero copy mechanism will not be used. 00:30:17.636 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.636 [2024-07-15 09:39:04.751058] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.636 [2024-07-15 09:39:04.804443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:18.575 09:39:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:18.575 09:39:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:30:18.575 09:39:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:18.575 09:39:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:18.575 09:39:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:18.575 09:39:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.575 09:39:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:18.575 09:39:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.575 09:39:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:18.575 09:39:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:18.837 nvme0n1 00:30:18.837 09:39:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:18.837 09:39:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.837 09:39:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:18.837 09:39:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.837 09:39:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:18.837 09:39:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:18.837 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:18.837 Zero copy mechanism will not be used. 00:30:18.837 Running I/O for 2 seconds... 
00:30:18.837 [2024-07-15 09:39:05.945953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:18.837 [2024-07-15 09:39:05.945986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.837 [2024-07-15 09:39:05.945995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:18.837 [2024-07-15 09:39:05.956072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:18.837 [2024-07-15 09:39:05.956094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.837 [2024-07-15 09:39:05.956101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:18.837 [2024-07-15 09:39:05.963956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:18.837 [2024-07-15 09:39:05.963974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.837 [2024-07-15 09:39:05.963981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:18.837 [2024-07-15 09:39:05.970536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:18.837 [2024-07-15 09:39:05.970553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.837 [2024-07-15 09:39:05.970559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.837 [2024-07-15 09:39:05.976728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:18.837 [2024-07-15 09:39:05.976746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.837 [2024-07-15 09:39:05.976758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:18.837 [2024-07-15 09:39:05.985766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:18.837 [2024-07-15 09:39:05.985784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.837 [2024-07-15 09:39:05.985794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:18.837 [2024-07-15 09:39:05.993595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:18.837 [2024-07-15 09:39:05.993613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.837 [2024-07-15 09:39:05.993619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:18.837 [2024-07-15 09:39:06.002827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:18.837 [2024-07-15 09:39:06.002844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.837 [2024-07-15 09:39:06.002851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.837 [2024-07-15 09:39:06.011670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:18.837 [2024-07-15 09:39:06.011688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.837 [2024-07-15 09:39:06.011694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:18.837 [2024-07-15 09:39:06.022357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:18.837 [2024-07-15 09:39:06.022375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.837 [2024-07-15 09:39:06.022381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:18.837 [2024-07-15 09:39:06.033899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:18.837 [2024-07-15 09:39:06.033917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.837 [2024-07-15 09:39:06.033923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.098 [2024-07-15 09:39:06.043382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.098 [2024-07-15 09:39:06.043400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.098 [2024-07-15 09:39:06.043406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.098 [2024-07-15 09:39:06.053260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.098 [2024-07-15 09:39:06.053278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.098 [2024-07-15 09:39:06.053284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.098 [2024-07-15 09:39:06.062972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.098 [2024-07-15 09:39:06.062989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.098 [2024-07-15 09:39:06.062996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.098 [2024-07-15 09:39:06.070848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.098 [2024-07-15 09:39:06.070868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.098 [2024-07-15 09:39:06.070875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.098 [2024-07-15 09:39:06.080045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.098 [2024-07-15 09:39:06.080062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.098 [2024-07-15 09:39:06.080069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.098 [2024-07-15 09:39:06.089120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.098 [2024-07-15 09:39:06.089137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.098 [2024-07-15 09:39:06.089143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.098 [2024-07-15 09:39:06.098124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.098 [2024-07-15 09:39:06.098141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.098 [2024-07-15 09:39:06.098147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.098 [2024-07-15 09:39:06.108001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.098 [2024-07-15 09:39:06.108019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.099 [2024-07-15 09:39:06.108025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.099 [2024-07-15 09:39:06.116102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.099 [2024-07-15 09:39:06.116119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.099 [2024-07-15 09:39:06.116126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.099 [2024-07-15 09:39:06.123735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.099 [2024-07-15 09:39:06.123757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:19.099 [2024-07-15 09:39:06.123763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.099 [2024-07-15 09:39:06.134092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.099 [2024-07-15 09:39:06.134109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.099 [2024-07-15 09:39:06.134116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.099 [2024-07-15 09:39:06.141900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.099 [2024-07-15 09:39:06.141917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.099 [2024-07-15 09:39:06.141926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.099 [2024-07-15 09:39:06.150734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.099 [2024-07-15 09:39:06.150756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.099 [2024-07-15 09:39:06.150762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.099 [2024-07-15 09:39:06.159446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.099 [2024-07-15 09:39:06.159463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.099 [2024-07-15 09:39:06.159469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.099 [2024-07-15 09:39:06.169142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.099 [2024-07-15 09:39:06.169159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.099 [2024-07-15 09:39:06.169166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.099 [2024-07-15 09:39:06.177207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.099 [2024-07-15 09:39:06.177224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.099 [2024-07-15 09:39:06.177230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.099 [2024-07-15 09:39:06.185476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.099 [2024-07-15 09:39:06.185494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.099 [2024-07-15 09:39:06.185500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.099 [2024-07-15 09:39:06.195885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.099 [2024-07-15 09:39:06.195903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.099 [2024-07-15 09:39:06.195910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.099 [2024-07-15 09:39:06.203843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.099 [2024-07-15 09:39:06.203860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.099 [2024-07-15 09:39:06.203866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.099 [2024-07-15 09:39:06.213207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.099 [2024-07-15 09:39:06.213224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.099 [2024-07-15 09:39:06.213230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.099 [2024-07-15 09:39:06.223725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.099 [2024-07-15 09:39:06.223746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.099 [2024-07-15 09:39:06.223758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.099 [2024-07-15 09:39:06.233826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.099 [2024-07-15 09:39:06.233844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.099 [2024-07-15 09:39:06.233850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.099 [2024-07-15 09:39:06.244632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.099 [2024-07-15 09:39:06.244650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.099 [2024-07-15 09:39:06.244656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.099 [2024-07-15 09:39:06.255766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.099 [2024-07-15 09:39:06.255784] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.099 [2024-07-15 09:39:06.255790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.099 [2024-07-15 09:39:06.265325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.099 [2024-07-15 09:39:06.265343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.099 [2024-07-15 09:39:06.265349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.099 [2024-07-15 09:39:06.274764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.099 [2024-07-15 09:39:06.274782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.099 [2024-07-15 09:39:06.274788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.099 [2024-07-15 09:39:06.284457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.099 [2024-07-15 09:39:06.284475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.099 [2024-07-15 09:39:06.284481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.099 [2024-07-15 09:39:06.294344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.099 [2024-07-15 09:39:06.294362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.099 [2024-07-15 09:39:06.294368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.360 [2024-07-15 09:39:06.303955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.360 [2024-07-15 09:39:06.303974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.360 [2024-07-15 09:39:06.303980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.360 [2024-07-15 09:39:06.313586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.360 [2024-07-15 09:39:06.313604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.360 [2024-07-15 09:39:06.313610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.360 [2024-07-15 09:39:06.324345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.360 
[2024-07-15 09:39:06.324363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.360 [2024-07-15 09:39:06.324369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.360 [2024-07-15 09:39:06.331635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.360 [2024-07-15 09:39:06.331653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.360 [2024-07-15 09:39:06.331659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.360 [2024-07-15 09:39:06.341315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.360 [2024-07-15 09:39:06.341333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.360 [2024-07-15 09:39:06.341339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.360 [2024-07-15 09:39:06.349355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.360 [2024-07-15 09:39:06.349374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.360 [2024-07-15 09:39:06.349380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.360 [2024-07-15 09:39:06.358369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.360 [2024-07-15 09:39:06.358387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.360 [2024-07-15 09:39:06.358394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.360 [2024-07-15 09:39:06.369583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.360 [2024-07-15 09:39:06.369601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.361 [2024-07-15 09:39:06.369607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.361 [2024-07-15 09:39:06.380012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.361 [2024-07-15 09:39:06.380031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.361 [2024-07-15 09:39:06.380037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.361 [2024-07-15 09:39:06.390330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x103a0f0) 00:30:19.361 [2024-07-15 09:39:06.390347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.361 [2024-07-15 09:39:06.390356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.361 [2024-07-15 09:39:06.401562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.361 [2024-07-15 09:39:06.401580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.361 [2024-07-15 09:39:06.401586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.361 [2024-07-15 09:39:06.412549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.361 [2024-07-15 09:39:06.412566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.361 [2024-07-15 09:39:06.412572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.361 [2024-07-15 09:39:06.422126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.361 [2024-07-15 09:39:06.422144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.361 [2024-07-15 09:39:06.422150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.361 [2024-07-15 09:39:06.432695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.361 [2024-07-15 09:39:06.432713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.361 [2024-07-15 09:39:06.432719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.361 [2024-07-15 09:39:06.441397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.361 [2024-07-15 09:39:06.441415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.361 [2024-07-15 09:39:06.441421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.361 [2024-07-15 09:39:06.450873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.361 [2024-07-15 09:39:06.450890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.361 [2024-07-15 09:39:06.450896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.361 [2024-07-15 09:39:06.460042] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.361 [2024-07-15 09:39:06.460059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.361 [2024-07-15 09:39:06.460065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.361 [2024-07-15 09:39:06.468546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.361 [2024-07-15 09:39:06.468564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.361 [2024-07-15 09:39:06.468570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.361 [2024-07-15 09:39:06.479237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.361 [2024-07-15 09:39:06.479257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.361 [2024-07-15 09:39:06.479263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.361 [2024-07-15 09:39:06.489033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.361 [2024-07-15 09:39:06.489050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.361 [2024-07-15 09:39:06.489056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.361 [2024-07-15 09:39:06.499861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.361 [2024-07-15 09:39:06.499878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.361 [2024-07-15 09:39:06.499884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.361 [2024-07-15 09:39:06.509285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.361 [2024-07-15 09:39:06.509303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.361 [2024-07-15 09:39:06.509309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.361 [2024-07-15 09:39:06.519199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.361 [2024-07-15 09:39:06.519217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.361 [2024-07-15 09:39:06.519223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:30:19.361 [2024-07-15 09:39:06.527655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.361 [2024-07-15 09:39:06.527673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.361 [2024-07-15 09:39:06.527679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.361 [2024-07-15 09:39:06.537820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.361 [2024-07-15 09:39:06.537837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.361 [2024-07-15 09:39:06.537843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.361 [2024-07-15 09:39:06.547324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.361 [2024-07-15 09:39:06.547342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.361 [2024-07-15 09:39:06.547348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.361 [2024-07-15 09:39:06.555289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.361 [2024-07-15 09:39:06.555307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.361 [2024-07-15 09:39:06.555313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.622 [2024-07-15 09:39:06.564329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.622 [2024-07-15 09:39:06.564348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.622 [2024-07-15 09:39:06.564354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.622 [2024-07-15 09:39:06.574701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.622 [2024-07-15 09:39:06.574719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.622 [2024-07-15 09:39:06.574725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.622 [2024-07-15 09:39:06.584180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.622 [2024-07-15 09:39:06.584199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.622 [2024-07-15 09:39:06.584205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.622 [2024-07-15 09:39:06.594928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.622 [2024-07-15 09:39:06.594945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.622 [2024-07-15 09:39:06.594952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.622 [2024-07-15 09:39:06.604750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.622 [2024-07-15 09:39:06.604773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.622 [2024-07-15 09:39:06.604779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.622 [2024-07-15 09:39:06.613373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.622 [2024-07-15 09:39:06.613391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.622 [2024-07-15 09:39:06.613397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.622 [2024-07-15 09:39:06.623158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.622 [2024-07-15 09:39:06.623176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.622 [2024-07-15 09:39:06.623183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.622 [2024-07-15 09:39:06.631232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.622 [2024-07-15 09:39:06.631249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.622 [2024-07-15 09:39:06.631255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.622 [2024-07-15 09:39:06.640027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.622 [2024-07-15 09:39:06.640045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.622 [2024-07-15 09:39:06.640054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.622 [2024-07-15 09:39:06.649364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.622 [2024-07-15 09:39:06.649382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.622 [2024-07-15 09:39:06.649388] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.622 [2024-07-15 09:39:06.658912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.622 [2024-07-15 09:39:06.658930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.622 [2024-07-15 09:39:06.658936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.622 [2024-07-15 09:39:06.668445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.622 [2024-07-15 09:39:06.668463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.622 [2024-07-15 09:39:06.668469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.622 [2024-07-15 09:39:06.677546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.622 [2024-07-15 09:39:06.677564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.622 [2024-07-15 09:39:06.677570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.622 [2024-07-15 09:39:06.689152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.622 [2024-07-15 09:39:06.689170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.622 [2024-07-15 09:39:06.689176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.622 [2024-07-15 09:39:06.699366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.622 [2024-07-15 09:39:06.699384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.622 [2024-07-15 09:39:06.699390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.622 [2024-07-15 09:39:06.708745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.622 [2024-07-15 09:39:06.708767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.622 [2024-07-15 09:39:06.708774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.623 [2024-07-15 09:39:06.719221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.623 [2024-07-15 09:39:06.719239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.623 [2024-07-15 09:39:06.719245] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.623 [2024-07-15 09:39:06.729065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.623 [2024-07-15 09:39:06.729082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.623 [2024-07-15 09:39:06.729088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.623 [2024-07-15 09:39:06.736251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.623 [2024-07-15 09:39:06.736268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.623 [2024-07-15 09:39:06.736274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.623 [2024-07-15 09:39:06.746724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.623 [2024-07-15 09:39:06.746742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.623 [2024-07-15 09:39:06.746747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.623 [2024-07-15 09:39:06.758035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.623 [2024-07-15 09:39:06.758052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.623 [2024-07-15 09:39:06.758058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.623 [2024-07-15 09:39:06.768153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.623 [2024-07-15 09:39:06.768171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.623 [2024-07-15 09:39:06.768177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.623 [2024-07-15 09:39:06.776976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.623 [2024-07-15 09:39:06.776994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.623 [2024-07-15 09:39:06.777000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.623 [2024-07-15 09:39:06.786909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.623 [2024-07-15 09:39:06.786926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:19.623 [2024-07-15 09:39:06.786932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.623 [2024-07-15 09:39:06.794747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.623 [2024-07-15 09:39:06.794768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.623 [2024-07-15 09:39:06.794774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.623 [2024-07-15 09:39:06.801327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.623 [2024-07-15 09:39:06.801345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.623 [2024-07-15 09:39:06.801354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.623 [2024-07-15 09:39:06.809565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.623 [2024-07-15 09:39:06.809583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.623 [2024-07-15 09:39:06.809589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.884 [2024-07-15 09:39:06.821126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.884 [2024-07-15 09:39:06.821145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.884 [2024-07-15 09:39:06.821151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.884 [2024-07-15 09:39:06.827987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.884 [2024-07-15 09:39:06.828004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.884 [2024-07-15 09:39:06.828010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.884 [2024-07-15 09:39:06.837584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.884 [2024-07-15 09:39:06.837602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.884 [2024-07-15 09:39:06.837608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.884 [2024-07-15 09:39:06.850172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.884 [2024-07-15 09:39:06.850190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10208 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.884 [2024-07-15 09:39:06.850196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.884 [2024-07-15 09:39:06.862291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.884 [2024-07-15 09:39:06.862309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.884 [2024-07-15 09:39:06.862315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.884 [2024-07-15 09:39:06.873954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.884 [2024-07-15 09:39:06.873972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.884 [2024-07-15 09:39:06.873978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.884 [2024-07-15 09:39:06.883971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.884 [2024-07-15 09:39:06.883989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.884 [2024-07-15 09:39:06.883995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.884 [2024-07-15 09:39:06.892147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.884 [2024-07-15 09:39:06.892168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.884 [2024-07-15 09:39:06.892174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.884 [2024-07-15 09:39:06.898975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.884 [2024-07-15 09:39:06.898993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.884 [2024-07-15 09:39:06.898999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.884 [2024-07-15 09:39:06.908276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.884 [2024-07-15 09:39:06.908293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.884 [2024-07-15 09:39:06.908299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.884 [2024-07-15 09:39:06.917551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.884 [2024-07-15 09:39:06.917569] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.884 [2024-07-15 09:39:06.917576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.884 [2024-07-15 09:39:06.925283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.884 [2024-07-15 09:39:06.925301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.884 [2024-07-15 09:39:06.925307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.884 [2024-07-15 09:39:06.932063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.884 [2024-07-15 09:39:06.932080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.884 [2024-07-15 09:39:06.932086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.884 [2024-07-15 09:39:06.938608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.884 [2024-07-15 09:39:06.938626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.884 [2024-07-15 09:39:06.938632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.884 [2024-07-15 09:39:06.946360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.884 [2024-07-15 09:39:06.946378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.884 [2024-07-15 09:39:06.946384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.884 [2024-07-15 09:39:06.952823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.884 [2024-07-15 09:39:06.952840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.884 [2024-07-15 09:39:06.952846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.884 [2024-07-15 09:39:06.958952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.884 [2024-07-15 09:39:06.958970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.884 [2024-07-15 09:39:06.958976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.884 [2024-07-15 09:39:06.965654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.884 [2024-07-15 09:39:06.965672] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.884 [2024-07-15 09:39:06.965678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.884 [2024-07-15 09:39:06.971774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.884 [2024-07-15 09:39:06.971792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.884 [2024-07-15 09:39:06.971798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.884 [2024-07-15 09:39:06.978903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.884 [2024-07-15 09:39:06.978921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.884 [2024-07-15 09:39:06.978927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.884 [2024-07-15 09:39:06.989715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.884 [2024-07-15 09:39:06.989733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.884 [2024-07-15 09:39:06.989739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.884 [2024-07-15 09:39:07.000137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.884 [2024-07-15 09:39:07.000155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.885 [2024-07-15 09:39:07.000161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.885 [2024-07-15 09:39:07.010103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.885 [2024-07-15 09:39:07.010121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.885 [2024-07-15 09:39:07.010127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.885 [2024-07-15 09:39:07.020006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.885 [2024-07-15 09:39:07.020024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.885 [2024-07-15 09:39:07.020030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.885 [2024-07-15 09:39:07.030948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 
00:30:19.885 [2024-07-15 09:39:07.030966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.885 [2024-07-15 09:39:07.030978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.885 [2024-07-15 09:39:07.040096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.885 [2024-07-15 09:39:07.040114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.885 [2024-07-15 09:39:07.040120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:19.885 [2024-07-15 09:39:07.049138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.885 [2024-07-15 09:39:07.049156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.885 [2024-07-15 09:39:07.049162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:19.885 [2024-07-15 09:39:07.058359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.885 [2024-07-15 09:39:07.058378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.885 [2024-07-15 09:39:07.058384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:19.885 [2024-07-15 09:39:07.067917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.885 [2024-07-15 09:39:07.067935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.885 [2024-07-15 09:39:07.067941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:19.885 [2024-07-15 09:39:07.081413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:19.885 [2024-07-15 09:39:07.081431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.885 [2024-07-15 09:39:07.081437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.146 [2024-07-15 09:39:07.094126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.146 [2024-07-15 09:39:07.094144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.146 [2024-07-15 09:39:07.094150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.146 [2024-07-15 09:39:07.107248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.146 [2024-07-15 09:39:07.107266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.146 [2024-07-15 09:39:07.107272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.146 [2024-07-15 09:39:07.118994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.146 [2024-07-15 09:39:07.119012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.146 [2024-07-15 09:39:07.119018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.146 [2024-07-15 09:39:07.129172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.146 [2024-07-15 09:39:07.129193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.146 [2024-07-15 09:39:07.129199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.146 [2024-07-15 09:39:07.139044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.146 [2024-07-15 09:39:07.139061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.146 [2024-07-15 09:39:07.139067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.146 [2024-07-15 09:39:07.147548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.146 [2024-07-15 09:39:07.147565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.146 [2024-07-15 09:39:07.147571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.147 [2024-07-15 09:39:07.156905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.147 [2024-07-15 09:39:07.156923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.147 [2024-07-15 09:39:07.156929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.147 [2024-07-15 09:39:07.167537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.147 [2024-07-15 09:39:07.167555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.147 [2024-07-15 09:39:07.167561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.147 [2024-07-15 09:39:07.174935] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.147 [2024-07-15 09:39:07.174953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.147 [2024-07-15 09:39:07.174959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.147 [2024-07-15 09:39:07.184799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.147 [2024-07-15 09:39:07.184817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.147 [2024-07-15 09:39:07.184823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.147 [2024-07-15 09:39:07.194149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.147 [2024-07-15 09:39:07.194166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.147 [2024-07-15 09:39:07.194172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.147 [2024-07-15 09:39:07.203809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.147 [2024-07-15 09:39:07.203827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.147 [2024-07-15 09:39:07.203833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.147 [2024-07-15 09:39:07.213381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.147 [2024-07-15 09:39:07.213399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.147 [2024-07-15 09:39:07.213405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.147 [2024-07-15 09:39:07.221932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.147 [2024-07-15 09:39:07.221950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.147 [2024-07-15 09:39:07.221956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.147 [2024-07-15 09:39:07.231699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.147 [2024-07-15 09:39:07.231717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.147 [2024-07-15 09:39:07.231723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:30:20.147 [2024-07-15 09:39:07.243609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.147 [2024-07-15 09:39:07.243627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.147 [2024-07-15 09:39:07.243633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.147 [2024-07-15 09:39:07.252670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.147 [2024-07-15 09:39:07.252689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.147 [2024-07-15 09:39:07.252695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.147 [2024-07-15 09:39:07.262158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.147 [2024-07-15 09:39:07.262176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.147 [2024-07-15 09:39:07.262182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.147 [2024-07-15 09:39:07.269928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.147 [2024-07-15 09:39:07.269946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.147 [2024-07-15 09:39:07.269952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.147 [2024-07-15 09:39:07.279532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.147 [2024-07-15 09:39:07.279550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.147 [2024-07-15 09:39:07.279556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.147 [2024-07-15 09:39:07.289456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.147 [2024-07-15 09:39:07.289474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.147 [2024-07-15 09:39:07.289483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.147 [2024-07-15 09:39:07.299160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.147 [2024-07-15 09:39:07.299178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.147 [2024-07-15 09:39:07.299184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.147 [2024-07-15 09:39:07.308230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.147 [2024-07-15 09:39:07.308248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.147 [2024-07-15 09:39:07.308254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.147 [2024-07-15 09:39:07.319445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.147 [2024-07-15 09:39:07.319463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.147 [2024-07-15 09:39:07.319470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.147 [2024-07-15 09:39:07.328699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.147 [2024-07-15 09:39:07.328718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.147 [2024-07-15 09:39:07.328724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.147 [2024-07-15 09:39:07.338953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.147 [2024-07-15 09:39:07.338971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.147 [2024-07-15 09:39:07.338977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.347442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.347460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.347466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.355928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.355947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.355953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.365325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.365343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.365349] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.373327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.373345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.373351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.383179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.383197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.383203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.392808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.392826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.392832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.402284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.402302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.402308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.413024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.413042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.413048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.421201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.421219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.421226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.430588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.430606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.430612] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.442564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.442583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.442589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.452092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.452110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.452119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.457300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.457318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.457324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.463695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.463712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.463718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.472301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.472318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.472324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.482012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.482029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.482035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.491850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.491867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:20.408 [2024-07-15 09:39:07.491873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.501107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.501124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.501130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.512437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.512454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.512460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.524326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.524343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.524350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.534066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.534086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.534092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.543196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.543214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.543220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.552602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.552619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.552625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.563055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.563072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11808 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.563078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.573197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.573213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.573220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.582663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.582680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.582686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.408 [2024-07-15 09:39:07.594066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.408 [2024-07-15 09:39:07.594084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.408 [2024-07-15 09:39:07.594090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.409 [2024-07-15 09:39:07.604053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.409 [2024-07-15 09:39:07.604070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.409 [2024-07-15 09:39:07.604077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.670 [2024-07-15 09:39:07.614659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.670 [2024-07-15 09:39:07.614677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.670 [2024-07-15 09:39:07.614683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.670 [2024-07-15 09:39:07.622387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.670 [2024-07-15 09:39:07.622405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.670 [2024-07-15 09:39:07.622411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.670 [2024-07-15 09:39:07.632349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.670 [2024-07-15 09:39:07.632367] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.670 [2024-07-15 09:39:07.632373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.670 [2024-07-15 09:39:07.641704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.670 [2024-07-15 09:39:07.641721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.670 [2024-07-15 09:39:07.641727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.670 [2024-07-15 09:39:07.650969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.670 [2024-07-15 09:39:07.650987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.670 [2024-07-15 09:39:07.650994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.670 [2024-07-15 09:39:07.658656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.670 [2024-07-15 09:39:07.658674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.670 [2024-07-15 09:39:07.658680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.670 [2024-07-15 09:39:07.666693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.670 [2024-07-15 09:39:07.666711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.670 [2024-07-15 09:39:07.666717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.670 [2024-07-15 09:39:07.676365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.670 [2024-07-15 09:39:07.676382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.670 [2024-07-15 09:39:07.676388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.670 [2024-07-15 09:39:07.685155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.670 [2024-07-15 09:39:07.685172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.670 [2024-07-15 09:39:07.685178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.670 [2024-07-15 09:39:07.693870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.671 [2024-07-15 09:39:07.693887] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.671 [2024-07-15 09:39:07.693896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.671 [2024-07-15 09:39:07.703816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.671 [2024-07-15 09:39:07.703833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.671 [2024-07-15 09:39:07.703839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.671 [2024-07-15 09:39:07.714851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.671 [2024-07-15 09:39:07.714868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.671 [2024-07-15 09:39:07.714873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.671 [2024-07-15 09:39:07.724330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.671 [2024-07-15 09:39:07.724348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.671 [2024-07-15 09:39:07.724354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.671 [2024-07-15 09:39:07.732853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.671 [2024-07-15 09:39:07.732870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.671 [2024-07-15 09:39:07.732876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.671 [2024-07-15 09:39:07.743844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.671 [2024-07-15 09:39:07.743860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.671 [2024-07-15 09:39:07.743866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.671 [2024-07-15 09:39:07.752867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.671 [2024-07-15 09:39:07.752884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.671 [2024-07-15 09:39:07.752890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.671 [2024-07-15 09:39:07.760836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 
00:30:20.671 [2024-07-15 09:39:07.760853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.671 [2024-07-15 09:39:07.760859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.671 [2024-07-15 09:39:07.770747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.671 [2024-07-15 09:39:07.770769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.671 [2024-07-15 09:39:07.770775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.671 [2024-07-15 09:39:07.779533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.671 [2024-07-15 09:39:07.779553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.671 [2024-07-15 09:39:07.779559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.671 [2024-07-15 09:39:07.788125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.671 [2024-07-15 09:39:07.788142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.671 [2024-07-15 09:39:07.788148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.671 [2024-07-15 09:39:07.796022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.671 [2024-07-15 09:39:07.796039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.671 [2024-07-15 09:39:07.796045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.671 [2024-07-15 09:39:07.806644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.671 [2024-07-15 09:39:07.806661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.671 [2024-07-15 09:39:07.806666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.671 [2024-07-15 09:39:07.816938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.671 [2024-07-15 09:39:07.816955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.671 [2024-07-15 09:39:07.816961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.671 [2024-07-15 09:39:07.825188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x103a0f0) 00:30:20.671 [2024-07-15 09:39:07.825204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.671 [2024-07-15 09:39:07.825210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.671 [2024-07-15 09:39:07.832025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.671 [2024-07-15 09:39:07.832042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.671 [2024-07-15 09:39:07.832049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.671 [2024-07-15 09:39:07.841540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.671 [2024-07-15 09:39:07.841556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.671 [2024-07-15 09:39:07.841562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.671 [2024-07-15 09:39:07.851579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.671 [2024-07-15 09:39:07.851596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.671 [2024-07-15 09:39:07.851602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.671 [2024-07-15 09:39:07.860084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.671 [2024-07-15 09:39:07.860101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.671 [2024-07-15 09:39:07.860107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.671 [2024-07-15 09:39:07.866972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.671 [2024-07-15 09:39:07.866989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.671 [2024-07-15 09:39:07.866995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.932 [2024-07-15 09:39:07.875642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0) 00:30:20.932 [2024-07-15 09:39:07.875659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.933 [2024-07-15 09:39:07.875665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.933 [2024-07-15 09:39:07.883560] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0)
00:30:20.933 [2024-07-15 09:39:07.883577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:20.933 [2024-07-15 09:39:07.883584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:20.933 [2024-07-15 09:39:07.892645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0)
00:30:20.933 [2024-07-15 09:39:07.892662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:20.933 [2024-07-15 09:39:07.892668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:20.933 [2024-07-15 09:39:07.901703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0)
00:30:20.933 [2024-07-15 09:39:07.901720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:20.933 [2024-07-15 09:39:07.901726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:20.933 [2024-07-15 09:39:07.911075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0)
00:30:20.933 [2024-07-15 09:39:07.911093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:20.933 [2024-07-15 09:39:07.911099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:20.933 [2024-07-15 09:39:07.919844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0)
00:30:20.933 [2024-07-15 09:39:07.919862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:20.933 [2024-07-15 09:39:07.919868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:20.933 [2024-07-15 09:39:07.932173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103a0f0)
00:30:20.933 [2024-07-15 09:39:07.932190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:20.933 [2024-07-15 09:39:07.932200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:20.933
00:30:20.933 Latency(us)
00:30:20.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:20.933 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:30:20.933 nvme0n1 : 2.00 3305.47 413.18 0.00 0.00 4837.67 1242.45 13762.56
00:30:20.933 ===================================================================================================================
00:30:20.933 Total : 3305.47 413.18 0.00 0.00 4837.67 1242.45 13762.56
00:30:20.933 0
00:30:20.933
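The xtrace that follows is get_transient_errcount: it pulls the per-bdev NVMe error counters over the bperf RPC socket and verifies that at least one COMMAND TRANSIENT TRANSPORT ERROR was recorded before the bdevperf process is killed. A minimal standalone sketch of that check, using the rpc.py path, socket, and jq filter shown in this run (the get_errcount wrapper name is illustrative only, not part of the test suite):

  # Read bdev iostat over the bperf RPC socket and extract the transient
  # transport error counter reported for nvme0n1.
  get_errcount() {
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          bdev_get_iostat -b nvme0n1 |
          jq -r '.bdevs[0]
              | .driver_specific
              | .nvme_error
              | .status_code
              | .command_transient_transport_error'
  }

  errcount=$(get_errcount)
  (( errcount > 0 )) && echo "digest error injection produced $errcount transient transport errors"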
09:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:20.933 09:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:20.933 09:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:20.933 | .driver_specific 00:30:20.933 | .nvme_error 00:30:20.933 | .status_code 00:30:20.933 | .command_transient_transport_error' 00:30:20.933 09:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:20.933 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 213 > 0 )) 00:30:20.933 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 886792 00:30:20.933 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 886792 ']' 00:30:20.933 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 886792 00:30:20.933 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:30:20.933 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:21.194 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 886792 00:30:21.194 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:21.194 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:21.194 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 886792' 00:30:21.194 killing process with pid 886792 00:30:21.194 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 886792 00:30:21.194 Received shutdown signal, test time was about 2.000000 seconds 00:30:21.194 00:30:21.194 Latency(us) 00:30:21.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.194 =================================================================================================================== 00:30:21.194 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:21.194 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 886792 00:30:21.194 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:30:21.194 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:21.194 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:30:21.194 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:21.194 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:21.194 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=887469 00:30:21.194 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 887469 /var/tmp/bperf.sock 00:30:21.194 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 887469 ']' 00:30:21.194 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w 
randwrite -o 4096 -t 2 -q 128 -z 00:30:21.195 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:21.195 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:21.195 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:21.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:21.195 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:21.195 09:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:21.195 [2024-07-15 09:39:08.340267] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:30:21.195 [2024-07-15 09:39:08.340319] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid887469 ] 00:30:21.195 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.455 [2024-07-15 09:39:08.421674] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.455 [2024-07-15 09:39:08.475504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.023 09:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:22.023 09:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:30:22.023 09:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:22.023 09:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:22.282 09:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:22.282 09:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.282 09:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:22.282 09:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.282 09:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:22.282 09:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:22.541 nvme0n1 00:30:22.542 09:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:22.542 09:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.542 09:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:22.542 09:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:30:22.542 09:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:22.542 09:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:22.542 Running I/O for 2 seconds... 00:30:22.542 [2024-07-15 09:39:09.634081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e8088 00:30:22.542 [2024-07-15 09:39:09.635648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.542 [2024-07-15 09:39:09.635681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:22.542 [2024-07-15 09:39:09.644945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e0a68 00:30:22.542 [2024-07-15 09:39:09.646104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.542 [2024-07-15 09:39:09.646124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:22.542 [2024-07-15 09:39:09.657975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e2c28 00:30:22.542 [2024-07-15 09:39:09.659511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.542 [2024-07-15 09:39:09.659528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.542 [2024-07-15 09:39:09.667484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190fd208 00:30:22.542 [2024-07-15 09:39:09.668372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.542 [2024-07-15 09:39:09.668389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.542 [2024-07-15 09:39:09.680388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e9168 00:30:22.542 [2024-07-15 09:39:09.681564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.542 [2024-07-15 09:39:09.681581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:22.542 [2024-07-15 09:39:09.693398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb328 00:30:22.542 [2024-07-15 09:39:09.694948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.542 [2024-07-15 09:39:09.694964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.542 [2024-07-15 09:39:09.702874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1332ac0) with pdu=0x2000190e3d08 00:30:22.542 [2024-07-15 09:39:09.703717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.542 [2024-07-15 09:39:09.703733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.542 [2024-07-15 09:39:09.717495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e6738 00:30:22.542 [2024-07-15 09:39:09.719265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.542 [2024-07-15 09:39:09.719281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:22.542 [2024-07-15 09:39:09.728126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e27f0 00:30:22.542 [2024-07-15 09:39:09.729444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.542 [2024-07-15 09:39:09.729460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:22.825 [2024-07-15 09:39:09.741537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e27f0 00:30:22.825 [2024-07-15 09:39:09.743502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.825 [2024-07-15 09:39:09.743521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:22.825 [2024-07-15 09:39:09.751814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eff18 00:30:22.825 [2024-07-15 09:39:09.753136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.825 [2024-07-15 09:39:09.753152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:22.825 [2024-07-15 09:39:09.763567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190f96f8 00:30:22.825 [2024-07-15 09:39:09.764894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.825 [2024-07-15 09:39:09.764909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:22.825 [2024-07-15 09:39:09.775302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190f8618 00:30:22.825 [2024-07-15 09:39:09.776627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.825 [2024-07-15 09:39:09.776643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:22.825 [2024-07-15 09:39:09.787089] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190fb8b8 00:30:22.825 [2024-07-15 09:39:09.788417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.825 [2024-07-15 09:39:09.788433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:22.825 [2024-07-15 09:39:09.798862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190fa7d8 00:30:22.825 [2024-07-15 09:39:09.800190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.825 [2024-07-15 09:39:09.800205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:22.825 [2024-07-15 09:39:09.810624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190fda78 00:30:22.825 [2024-07-15 09:39:09.811938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.825 [2024-07-15 09:39:09.811953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:22.825 [2024-07-15 09:39:09.822382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190fc998 00:30:22.825 [2024-07-15 09:39:09.823703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.825 [2024-07-15 09:39:09.823718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:22.825 [2024-07-15 09:39:09.834134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eaef0 00:30:22.825 [2024-07-15 09:39:09.835453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.825 [2024-07-15 09:39:09.835468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:22.825 [2024-07-15 09:39:09.845887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e9e10 00:30:22.825 [2024-07-15 09:39:09.847203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.825 [2024-07-15 09:39:09.847218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:22.825 [2024-07-15 09:39:09.857642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e8d30 00:30:22.825 [2024-07-15 09:39:09.858937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.825 [2024-07-15 09:39:09.858952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:22.825 [2024-07-15 09:39:09.869407] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e7c50 00:30:22.825 [2024-07-15 09:39:09.870706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.825 [2024-07-15 09:39:09.870722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:22.825 [2024-07-15 09:39:09.881153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190ebb98 00:30:22.825 [2024-07-15 09:39:09.882478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.825 [2024-07-15 09:39:09.882493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:22.825 [2024-07-15 09:39:09.892914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190de8a8 00:30:22.825 [2024-07-15 09:39:09.894244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.825 [2024-07-15 09:39:09.894260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:22.825 [2024-07-15 09:39:09.904650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190df988 00:30:22.825 [2024-07-15 09:39:09.905972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.825 [2024-07-15 09:39:09.905988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:22.825 [2024-07-15 09:39:09.916417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190f4b08 00:30:22.825 [2024-07-15 09:39:09.917745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.825 [2024-07-15 09:39:09.917763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:22.825 [2024-07-15 09:39:09.928188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190f3a28 00:30:22.825 [2024-07-15 09:39:09.929526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.825 [2024-07-15 09:39:09.929541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:22.825 [2024-07-15 09:39:09.939954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190efae0 00:30:22.825 [2024-07-15 09:39:09.941290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.826 [2024-07-15 09:39:09.941306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:22.826 
[2024-07-15 09:39:09.951708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190f9b30 00:30:22.826 [2024-07-15 09:39:09.953023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.826 [2024-07-15 09:39:09.953039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:22.826 [2024-07-15 09:39:09.963451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190f8a50 00:30:22.826 [2024-07-15 09:39:09.964777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.826 [2024-07-15 09:39:09.964792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:22.826 [2024-07-15 09:39:09.974440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190f20d8 00:30:22.826 [2024-07-15 09:39:09.975754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.826 [2024-07-15 09:39:09.975769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:22.826 [2024-07-15 09:39:09.989013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e3060 00:30:22.826 [2024-07-15 09:39:09.991138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.826 [2024-07-15 09:39:09.991153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.826 [2024-07-15 09:39:09.999282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190df118 00:30:22.826 [2024-07-15 09:39:10.000776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.826 [2024-07-15 09:39:10.000792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:22.826 [2024-07-15 09:39:10.011498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e6b70 00:30:23.164 [2024-07-15 09:39:10.012943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.164 [2024-07-15 09:39:10.012961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:23.164 [2024-07-15 09:39:10.024729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e6b70 00:30:23.164 [2024-07-15 09:39:10.027001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.164 [2024-07-15 09:39:10.027071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 
m:0 dnr:0 00:30:23.164 [2024-07-15 09:39:10.035827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190f31b8 00:30:23.164 [2024-07-15 09:39:10.037462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.164 [2024-07-15 09:39:10.037477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:23.164 [2024-07-15 09:39:10.045304] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190fb480 00:30:23.164 [2024-07-15 09:39:10.046283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.164 [2024-07-15 09:39:10.046302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:23.164 [2024-07-15 09:39:10.058700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190f5be8 00:30:23.164 [2024-07-15 09:39:10.060325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.164 [2024-07-15 09:39:10.060339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:23.164 [2024-07-15 09:39:10.068833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190fd208 00:30:23.164 [2024-07-15 09:39:10.069792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.164 [2024-07-15 09:39:10.069806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:23.164 [2024-07-15 09:39:10.080561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190fd208 00:30:23.164 [2024-07-15 09:39:10.081491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.164 [2024-07-15 09:39:10.081507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:23.164 [2024-07-15 09:39:10.092430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190fd208 00:30:23.164 [2024-07-15 09:39:10.093408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.164 [2024-07-15 09:39:10.093430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:23.164 [2024-07-15 09:39:10.103573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190de038 00:30:23.164 [2024-07-15 09:39:10.104515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.164 [2024-07-15 09:39:10.104531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:79 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:23.164 [2024-07-15 09:39:10.116483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190f9f68 00:30:23.164 [2024-07-15 09:39:10.117593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.165 [2024-07-15 09:39:10.117609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:23.165 [2024-07-15 09:39:10.128214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190fb048 00:30:23.165 [2024-07-15 09:39:10.129335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.165 [2024-07-15 09:39:10.129350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:23.165 [2024-07-15 09:39:10.139995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190f6890 00:30:23.165 [2024-07-15 09:39:10.141120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.165 [2024-07-15 09:39:10.141135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:23.165 [2024-07-15 09:39:10.151726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e7c50 00:30:23.165 [2024-07-15 09:39:10.152836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.165 [2024-07-15 09:39:10.152851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:23.165 [2024-07-15 09:39:10.163449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190ebb98 00:30:23.165 [2024-07-15 09:39:10.164543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.165 [2024-07-15 09:39:10.164558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:23.165 [2024-07-15 09:39:10.175202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190de8a8 00:30:23.165 [2024-07-15 09:39:10.176288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.165 [2024-07-15 09:39:10.176303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:23.165 [2024-07-15 09:39:10.186962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190f5378 00:30:23.165 [2024-07-15 09:39:10.188097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:8378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.165 [2024-07-15 09:39:10.188112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:23.165 [2024-07-15 09:39:10.198710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e9e10 00:30:23.165 [2024-07-15 09:39:10.199831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.165 [2024-07-15 09:39:10.199846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:23.165 [2024-07-15 09:39:10.210457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eaef0 00:30:23.165 [2024-07-15 09:39:10.211578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.165 [2024-07-15 09:39:10.211592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:23.165 [2024-07-15 09:39:10.222198] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190fc998 00:30:23.165 [2024-07-15 09:39:10.223322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.165 [2024-07-15 09:39:10.223337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:23.165 [2024-07-15 09:39:10.233957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190f5be8 00:30:23.165 [2024-07-15 09:39:10.235097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.165 [2024-07-15 09:39:10.235113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:23.165 [2024-07-15 09:39:10.245674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190fda78 00:30:23.165 [2024-07-15 09:39:10.246759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.165 [2024-07-15 09:39:10.246774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:23.165 [2024-07-15 09:39:10.257416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190f0ff8 00:30:23.165 [2024-07-15 09:39:10.258542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.165 [2024-07-15 09:39:10.258557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:23.165 [2024-07-15 09:39:10.269173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e5658 00:30:23.165 [2024-07-15 09:39:10.270257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.165 [2024-07-15 09:39:10.270272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:23.165 [2024-07-15 09:39:10.280870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e3d08 00:30:23.165 [2024-07-15 09:39:10.281943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.165 [2024-07-15 09:39:10.281957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:23.165 [2024-07-15 09:39:10.292588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e3d08 00:30:23.165 [2024-07-15 09:39:10.293662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.165 [2024-07-15 09:39:10.293677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:23.165 [2024-07-15 09:39:10.304310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e3d08 00:30:23.165 [2024-07-15 09:39:10.305384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.165 [2024-07-15 09:39:10.305399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:23.165 [2024-07-15 09:39:10.316021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e3d08 00:30:23.165 [2024-07-15 09:39:10.317136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.165 [2024-07-15 09:39:10.317151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:23.165 [2024-07-15 09:39:10.327735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e3d08 00:30:23.165 [2024-07-15 09:39:10.328827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.165 [2024-07-15 09:39:10.328842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:23.165 [2024-07-15 09:39:10.339453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e3d08 00:30:23.165 [2024-07-15 09:39:10.340561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.165 [2024-07-15 09:39:10.340576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:23.165 [2024-07-15 09:39:10.350360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e5ec8 00:30:23.165 [2024-07-15 09:39:10.351458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.165 [2024-07-15 09:39:10.351476] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:23.165 [2024-07-15 09:39:10.362006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e6738 00:30:23.165 [2024-07-15 09:39:10.363047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.165 [2024-07-15 09:39:10.363062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:23.426 [2024-07-15 09:39:10.374498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190ecc78 00:30:23.426 [2024-07-15 09:39:10.375582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.426 [2024-07-15 09:39:10.375596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:23.426 [2024-07-15 09:39:10.386231] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190ecc78 00:30:23.426 [2024-07-15 09:39:10.387312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.427 [2024-07-15 09:39:10.387327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:23.427 [2024-07-15 09:39:10.399420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190fc998 00:30:23.427 [2024-07-15 09:39:10.401123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.427 [2024-07-15 09:39:10.401137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:23.427 [2024-07-15 09:39:10.409541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.427 [2024-07-15 09:39:10.410608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.427 [2024-07-15 09:39:10.410623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.427 [2024-07-15 09:39:10.421248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.427 [2024-07-15 09:39:10.422304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.427 [2024-07-15 09:39:10.422320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.427 [2024-07-15 09:39:10.433064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.427 [2024-07-15 09:39:10.434091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.427 [2024-07-15 09:39:10.434106] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.427 [2024-07-15 09:39:10.444766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.427 [2024-07-15 09:39:10.445814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.427 [2024-07-15 09:39:10.445828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.427 [2024-07-15 09:39:10.456466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.427 [2024-07-15 09:39:10.457521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.427 [2024-07-15 09:39:10.457536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.427 [2024-07-15 09:39:10.468156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.427 [2024-07-15 09:39:10.469215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.427 [2024-07-15 09:39:10.469230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.427 [2024-07-15 09:39:10.479848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.427 [2024-07-15 09:39:10.480890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.427 [2024-07-15 09:39:10.480906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.427 [2024-07-15 09:39:10.491569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.427 [2024-07-15 09:39:10.492620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.427 [2024-07-15 09:39:10.492635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.427 [2024-07-15 09:39:10.503290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.427 [2024-07-15 09:39:10.504345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.427 [2024-07-15 09:39:10.504360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.427 [2024-07-15 09:39:10.514983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.427 [2024-07-15 09:39:10.516046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.427 [2024-07-15 
09:39:10.516062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.427 [2024-07-15 09:39:10.526692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.427 [2024-07-15 09:39:10.527760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.427 [2024-07-15 09:39:10.527775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.427 [2024-07-15 09:39:10.538372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.427 [2024-07-15 09:39:10.539427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.427 [2024-07-15 09:39:10.539441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.427 [2024-07-15 09:39:10.550092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.427 [2024-07-15 09:39:10.551143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.427 [2024-07-15 09:39:10.551159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.427 [2024-07-15 09:39:10.561806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.427 [2024-07-15 09:39:10.562873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.427 [2024-07-15 09:39:10.562888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.427 [2024-07-15 09:39:10.573526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.427 [2024-07-15 09:39:10.574540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.427 [2024-07-15 09:39:10.574555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.427 [2024-07-15 09:39:10.585231] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.427 [2024-07-15 09:39:10.586275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.427 [2024-07-15 09:39:10.586290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.427 [2024-07-15 09:39:10.596930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.427 [2024-07-15 09:39:10.597979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:23.427 [2024-07-15 09:39:10.597994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.427 [2024-07-15 09:39:10.608832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.427 [2024-07-15 09:39:10.609846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.427 [2024-07-15 09:39:10.609860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.427 [2024-07-15 09:39:10.620554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.427 [2024-07-15 09:39:10.621586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.427 [2024-07-15 09:39:10.621600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.687 [2024-07-15 09:39:10.632247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.687 [2024-07-15 09:39:10.633297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-07-15 09:39:10.633312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.687 [2024-07-15 09:39:10.643958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.687 [2024-07-15 09:39:10.644971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-07-15 09:39:10.644987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.687 [2024-07-15 09:39:10.655655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.687 [2024-07-15 09:39:10.656660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-07-15 09:39:10.656678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.687 [2024-07-15 09:39:10.667335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.687 [2024-07-15 09:39:10.668402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-07-15 09:39:10.668417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.687 [2024-07-15 09:39:10.679028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.687 [2024-07-15 09:39:10.680077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6186 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-07-15 09:39:10.680091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.687 [2024-07-15 09:39:10.690745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.687 [2024-07-15 09:39:10.691795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-07-15 09:39:10.691810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.687 [2024-07-15 09:39:10.702446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.687 [2024-07-15 09:39:10.703492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-07-15 09:39:10.703507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.687 [2024-07-15 09:39:10.714167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.687 [2024-07-15 09:39:10.715216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-07-15 09:39:10.715231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.687 [2024-07-15 09:39:10.725860] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.687 [2024-07-15 09:39:10.726909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-07-15 09:39:10.726924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.687 [2024-07-15 09:39:10.737566] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.687 [2024-07-15 09:39:10.738618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-07-15 09:39:10.738634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.687 [2024-07-15 09:39:10.749291] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.687 [2024-07-15 09:39:10.750341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-07-15 09:39:10.750357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.687 [2024-07-15 09:39:10.761008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.687 [2024-07-15 09:39:10.762032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 
lba:10050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-07-15 09:39:10.762047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.687 [2024-07-15 09:39:10.772725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.687 [2024-07-15 09:39:10.773783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-07-15 09:39:10.773798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.687 [2024-07-15 09:39:10.784459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.687 [2024-07-15 09:39:10.785510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-07-15 09:39:10.785525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.687 [2024-07-15 09:39:10.796160] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.687 [2024-07-15 09:39:10.797211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.688 [2024-07-15 09:39:10.797227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.688 [2024-07-15 09:39:10.807888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.688 [2024-07-15 09:39:10.808906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.688 [2024-07-15 09:39:10.808921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.688 [2024-07-15 09:39:10.819606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.688 [2024-07-15 09:39:10.820656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.688 [2024-07-15 09:39:10.820671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.688 [2024-07-15 09:39:10.831329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.688 [2024-07-15 09:39:10.832375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.688 [2024-07-15 09:39:10.832391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.688 [2024-07-15 09:39:10.843060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.688 [2024-07-15 09:39:10.844154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:35 nsid:1 lba:11478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.688 [2024-07-15 09:39:10.844168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.688 [2024-07-15 09:39:10.854805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.688 [2024-07-15 09:39:10.855833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.688 [2024-07-15 09:39:10.855847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.688 [2024-07-15 09:39:10.866503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.688 [2024-07-15 09:39:10.867593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.688 [2024-07-15 09:39:10.867608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.688 [2024-07-15 09:39:10.878227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.688 [2024-07-15 09:39:10.879277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.688 [2024-07-15 09:39:10.879292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.947 [2024-07-15 09:39:10.889951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.948 [2024-07-15 09:39:10.890981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.948 [2024-07-15 09:39:10.890997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.948 [2024-07-15 09:39:10.901667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.948 [2024-07-15 09:39:10.902712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.948 [2024-07-15 09:39:10.902728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.948 [2024-07-15 09:39:10.913354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.948 [2024-07-15 09:39:10.914406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.948 [2024-07-15 09:39:10.914421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.948 [2024-07-15 09:39:10.925046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.948 [2024-07-15 09:39:10.926093] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.948 [2024-07-15 09:39:10.926109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.948 [2024-07-15 09:39:10.936762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.948 [2024-07-15 09:39:10.937808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.948 [2024-07-15 09:39:10.937823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.948 [2024-07-15 09:39:10.948476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.948 [2024-07-15 09:39:10.949529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.948 [2024-07-15 09:39:10.949543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.948 [2024-07-15 09:39:10.960177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.948 [2024-07-15 09:39:10.961191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.948 [2024-07-15 09:39:10.961209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.948 [2024-07-15 09:39:10.971901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.948 [2024-07-15 09:39:10.972927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.948 [2024-07-15 09:39:10.972943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.948 [2024-07-15 09:39:10.983603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.948 [2024-07-15 09:39:10.984618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.948 [2024-07-15 09:39:10.984633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.948 [2024-07-15 09:39:10.995325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.948 [2024-07-15 09:39:10.996371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.948 [2024-07-15 09:39:10.996386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.948 [2024-07-15 09:39:11.007050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.948 [2024-07-15 
09:39:11.008088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.948 [2024-07-15 09:39:11.008103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.948 [2024-07-15 09:39:11.018762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.948 [2024-07-15 09:39:11.019807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.948 [2024-07-15 09:39:11.019822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.948 [2024-07-15 09:39:11.030459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.948 [2024-07-15 09:39:11.031518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.948 [2024-07-15 09:39:11.031533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.948 [2024-07-15 09:39:11.042167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.948 [2024-07-15 09:39:11.043200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.948 [2024-07-15 09:39:11.043215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.948 [2024-07-15 09:39:11.053858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.948 [2024-07-15 09:39:11.054871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.948 [2024-07-15 09:39:11.054886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.948 [2024-07-15 09:39:11.065602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.948 [2024-07-15 09:39:11.066655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.948 [2024-07-15 09:39:11.066671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.948 [2024-07-15 09:39:11.077323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.948 [2024-07-15 09:39:11.078376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.948 [2024-07-15 09:39:11.078391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.948 [2024-07-15 09:39:11.089066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 
00:30:23.948 [2024-07-15 09:39:11.090207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.948 [2024-07-15 09:39:11.090223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.948 [2024-07-15 09:39:11.100906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.948 [2024-07-15 09:39:11.101932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.948 [2024-07-15 09:39:11.101947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.948 [2024-07-15 09:39:11.112623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.948 [2024-07-15 09:39:11.113689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.948 [2024-07-15 09:39:11.113703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.948 [2024-07-15 09:39:11.124345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.948 [2024-07-15 09:39:11.125393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.948 [2024-07-15 09:39:11.125408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.948 [2024-07-15 09:39:11.136068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:23.948 [2024-07-15 09:39:11.137116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.948 [2024-07-15 09:39:11.137131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:24.209 [2024-07-15 09:39:11.147793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:24.210 [2024-07-15 09:39:11.148838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.210 [2024-07-15 09:39:11.148853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:24.210 [2024-07-15 09:39:11.159502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:24.210 [2024-07-15 09:39:11.160557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.210 [2024-07-15 09:39:11.160572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:24.210 [2024-07-15 09:39:11.171199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) 
with pdu=0x2000190eb760 00:30:24.210 [2024-07-15 09:39:11.172251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.210 [2024-07-15 09:39:11.172266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:24.210 [2024-07-15 09:39:11.182930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:24.210 [2024-07-15 09:39:11.183996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.210 [2024-07-15 09:39:11.184011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:24.210 [2024-07-15 09:39:11.194656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:24.210 [2024-07-15 09:39:11.195713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.210 [2024-07-15 09:39:11.195728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:24.210 [2024-07-15 09:39:11.206372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:24.210 [2024-07-15 09:39:11.207423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.210 [2024-07-15 09:39:11.207439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:24.210 [2024-07-15 09:39:11.218095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:24.210 [2024-07-15 09:39:11.219153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.210 [2024-07-15 09:39:11.219169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:24.210 [2024-07-15 09:39:11.229823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:24.210 [2024-07-15 09:39:11.230833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.210 [2024-07-15 09:39:11.230848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:24.210 [2024-07-15 09:39:11.241501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:24.210 [2024-07-15 09:39:11.242553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.210 [2024-07-15 09:39:11.242568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:24.210 [2024-07-15 09:39:11.253221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:24.210 [2024-07-15 09:39:11.254269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.210 [2024-07-15 09:39:11.254283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:24.210 [2024-07-15 09:39:11.264936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:24.210 [2024-07-15 09:39:11.265997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.210 [2024-07-15 09:39:11.266014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:24.210 [2024-07-15 09:39:11.276652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:24.210 [2024-07-15 09:39:11.277702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.210 [2024-07-15 09:39:11.277717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:24.210 [2024-07-15 09:39:11.288354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:24.210 [2024-07-15 09:39:11.289404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.210 [2024-07-15 09:39:11.289419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:24.210 [2024-07-15 09:39:11.300055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:24.210 [2024-07-15 09:39:11.301069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.210 [2024-07-15 09:39:11.301084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:24.210 [2024-07-15 09:39:11.311767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:24.210 [2024-07-15 09:39:11.312819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.210 [2024-07-15 09:39:11.312835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:24.210 [2024-07-15 09:39:11.323491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:24.210 [2024-07-15 09:39:11.324539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.210 [2024-07-15 09:39:11.324554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:24.210 [2024-07-15 09:39:11.335201] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:24.210 [2024-07-15 09:39:11.336255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.210 [2024-07-15 09:39:11.336270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:24.210 [2024-07-15 09:39:11.346922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:24.210 [2024-07-15 09:39:11.347973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.210 [2024-07-15 09:39:11.347988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:24.210 [2024-07-15 09:39:11.358626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:24.210 [2024-07-15 09:39:11.359683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.210 [2024-07-15 09:39:11.359698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:24.210 [2024-07-15 09:39:11.371837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb760 00:30:24.210 [2024-07-15 09:39:11.373525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.210 [2024-07-15 09:39:11.373540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:24.210 [2024-07-15 09:39:11.382139] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e1f80 00:30:24.210 [2024-07-15 09:39:11.383196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.210 [2024-07-15 09:39:11.383211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:24.210 [2024-07-15 09:39:11.393914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190efae0 00:30:24.210 [2024-07-15 09:39:11.394947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.210 [2024-07-15 09:39:11.394963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:24.210 [2024-07-15 09:39:11.407118] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190f9b30 00:30:24.210 [2024-07-15 09:39:11.408793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.210 [2024-07-15 09:39:11.408808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:24.471 [2024-07-15 09:39:11.417738] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190f7538 00:30:24.471 [2024-07-15 09:39:11.418939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.471 [2024-07-15 09:39:11.418953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:24.471 [2024-07-15 09:39:11.429618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e3498 00:30:24.471 [2024-07-15 09:39:11.430813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.471 [2024-07-15 09:39:11.430828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:24.471 [2024-07-15 09:39:11.443053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190fd640 00:30:24.471 [2024-07-15 09:39:11.444873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.471 [2024-07-15 09:39:11.444888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:24.471 [2024-07-15 09:39:11.453053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190f1430 00:30:24.471 [2024-07-15 09:39:11.454389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.471 [2024-07-15 09:39:11.454404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:24.471 [2024-07-15 09:39:11.465575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e5220 00:30:24.471 [2024-07-15 09:39:11.466880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.471 [2024-07-15 09:39:11.466896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:24.471 [2024-07-15 09:39:11.477336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e6300 00:30:24.471 [2024-07-15 09:39:11.478678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.471 [2024-07-15 09:39:11.478694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:24.471 [2024-07-15 09:39:11.489077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190fbcf0 00:30:24.471 [2024-07-15 09:39:11.490391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.471 [2024-07-15 09:39:11.490406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:24.471 
[2024-07-15 09:39:11.500845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190f6cc8 00:30:24.471 [2024-07-15 09:39:11.502178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.471 [2024-07-15 09:39:11.502193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:24.471 [2024-07-15 09:39:11.512596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e0a68 00:30:24.471 [2024-07-15 09:39:11.513930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.471 [2024-07-15 09:39:11.513945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:24.471 [2024-07-15 09:39:11.524325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e3d08 00:30:24.471 [2024-07-15 09:39:11.525663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.471 [2024-07-15 09:39:11.525678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:24.471 [2024-07-15 09:39:11.536074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190e8d30 00:30:24.471 [2024-07-15 09:39:11.537420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.471 [2024-07-15 09:39:11.537434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:24.471 [2024-07-15 09:39:11.547790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190f3a28 00:30:24.471 [2024-07-15 09:39:11.549128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.471 [2024-07-15 09:39:11.549142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:24.471 [2024-07-15 09:39:11.559497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190ea248 00:30:24.471 [2024-07-15 09:39:11.560835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.471 [2024-07-15 09:39:11.560850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:24.471 [2024-07-15 09:39:11.571241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190eb328 00:30:24.471 [2024-07-15 09:39:11.572579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.471 [2024-07-15 09:39:11.572597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 
dnr:0
00:30:24.471 [2024-07-15 09:39:11.584499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190fcdd0
00:30:24.471 [2024-07-15 09:39:11.586497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:24.471 [2024-07-15 09:39:11.586511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:30:24.471 [2024-07-15 09:39:11.594784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190f8e88
00:30:24.471 [2024-07-15 09:39:11.596125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:24.471 [2024-07-15 09:39:11.596140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:24.471 [2024-07-15 09:39:11.606718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190f0788
00:30:24.471 [2024-07-15 09:39:11.608088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:24.471 [2024-07-15 09:39:11.608102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:24.471 [2024-07-15 09:39:11.618438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332ac0) with pdu=0x2000190ef6a8
00:30:24.471 [2024-07-15 09:39:11.619794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:24.472 [2024-07-15 09:39:11.619809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:24.472
00:30:24.472 Latency(us)
00:30:24.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:24.472 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:24.472 nvme0n1 : 2.00 21743.27 84.93 0.00 0.00 5878.91 2102.61 14527.15
00:30:24.472 ===================================================================================================================
00:30:24.472 Total : 21743.27 84.93 0.00 0.00 5878.91 2102.61 14527.15
00:30:24.472 0
00:30:24.472 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:24.472 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:24.472 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:24.472 | .driver_specific
00:30:24.472 | .nvme_error
00:30:24.472 | .status_code
00:30:24.472 | .command_transient_transport_error'
00:30:24.472 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:24.732 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 170 > 0 ))
00:30:24.732 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 887469
00:30:24.732 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 887469 ']'
00:30:24.732 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 887469
00:30:24.732 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:30:24.732 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:30:24.732 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 887469
00:30:24.733 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:30:24.733 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:30:24.733 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 887469'
00:30:24.733 killing process with pid 887469
00:30:24.733 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 887469
00:30:24.733 Received shutdown signal, test time was about 2.000000 seconds
00:30:24.733
00:30:24.733 Latency(us)
00:30:24.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:24.733 ===================================================================================================================
00:30:24.733 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:24.733 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 887469
00:30:24.993 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:30:24.993 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:24.993 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:30:24.993 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:30:24.993 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:30:24.993 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=888607
00:30:24.993 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 888607 /var/tmp/bperf.sock
00:30:24.993 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 888607 ']'
00:30:24.993 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:30:24.993 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:24.993 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:30:24.993 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:24.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:24.994 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:30:24.994 09:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:24.994 [2024-07-15 09:39:12.032553] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization...
00:30:24.994 [2024-07-15 09:39:12.032606] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid888607 ]
00:30:24.994 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:24.994 Zero copy mechanism will not be used.
00:30:24.994 EAL: No free 2048 kB hugepages reported on node 1
00:30:24.994 [2024-07-15 09:39:12.114142] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:24.994 [2024-07-15 09:39:12.167781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:30:25.938 09:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:30:25.938 09:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:30:25.938 09:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:25.938 09:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:25.938 09:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:25.938 09:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:25.938 09:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:25.938 09:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:25.938 09:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:25.938 09:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:26.197 nvme0n1
00:30:26.197 09:39:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:30:26.197 09:39:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:26.197 09:39:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:26.197 09:39:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:26.198 09:39:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:26.198 09:39:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:26.198 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:26.198 Zero copy mechanism will not be used.
00:30:26.198 Running I/O for 2 seconds...
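The randwrite/131072/qd16 error pass that follows was set up by the xtrace above. Consolidated into standalone commands, the host-side sequence is roughly the following (a sketch only: the rpc.py path, bperf.sock socket, target address, NQN and error-injection flags are copied verbatim from the trace; the accel_error_inject_error calls are issued through rpc_cmd against the nvmf target application, whose RPC socket is not shown here):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # keep per-status-code NVMe error counters and retry failed I/O at the bdev layer without limit
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # on the target (via rpc_cmd): make sure crc32c error injection starts out disabled
  #   accel_error_inject_error -o crc32c -t disable
  # attach the controller with TCP data digest (--ddgst) enabled so every data PDU carries a CRC32C digest
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # on the target again: corrupt crc32c results for the duration of the run (flags as traced)
  #   accel_error_inject_error -o crc32c -t corrupt -i 32
  # kick off the timed workload in the already-running bdevperf (-w randwrite -o 131072 -q 16 -t 2)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests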
00:30:26.458 [2024-07-15 09:39:13.411972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.458 [2024-07-15 09:39:13.412350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.458 [2024-07-15 09:39:13.412379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.458 [2024-07-15 09:39:13.420563] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.458 [2024-07-15 09:39:13.420930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.458 [2024-07-15 09:39:13.420952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.429176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.429527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.429546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.436647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.436968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.436986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.443181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.443528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.443545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.452064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.452387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.452405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.458530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.458889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.458907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.464729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.465087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.465105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.471234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.471546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.471564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.477855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.478178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.478195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.485016] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.485321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.485339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.493814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.494169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.494186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.501597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.501950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.501967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.511356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.511706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.511724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.521040] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.521366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.521383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.530738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.531089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.531106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.537425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.537779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.537796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.542644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.542864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.542880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.548933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.549262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.549279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.555119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.555208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.555224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.560598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.560904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.560922] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.567854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.568181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.568198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.573005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.573350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.573370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.577461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.577670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.577687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.582611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.582941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.582958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.589061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.589365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.589382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.595996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.596330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.596349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.601812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.602023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 
[2024-07-15 09:39:13.602039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.607814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.608170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.608188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.615977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.616191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.616207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.459 [2024-07-15 09:39:13.621451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.459 [2024-07-15 09:39:13.621661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.459 [2024-07-15 09:39:13.621677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.460 [2024-07-15 09:39:13.628237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.460 [2024-07-15 09:39:13.628318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.460 [2024-07-15 09:39:13.628333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.460 [2024-07-15 09:39:13.635044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.460 [2024-07-15 09:39:13.635348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.460 [2024-07-15 09:39:13.635365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.460 [2024-07-15 09:39:13.639819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.460 [2024-07-15 09:39:13.640166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.460 [2024-07-15 09:39:13.640183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.460 [2024-07-15 09:39:13.645161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.460 [2024-07-15 09:39:13.645370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.460 [2024-07-15 09:39:13.645386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.460 [2024-07-15 09:39:13.652331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.460 [2024-07-15 09:39:13.652658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.460 [2024-07-15 09:39:13.652675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.722 [2024-07-15 09:39:13.659153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.722 [2024-07-15 09:39:13.659467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.722 [2024-07-15 09:39:13.659485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.722 [2024-07-15 09:39:13.667017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.722 [2024-07-15 09:39:13.667343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.722 [2024-07-15 09:39:13.667360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.722 [2024-07-15 09:39:13.674013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.722 [2024-07-15 09:39:13.674319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.722 [2024-07-15 09:39:13.674336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.722 [2024-07-15 09:39:13.683099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.722 [2024-07-15 09:39:13.683412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.722 [2024-07-15 09:39:13.683432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.722 [2024-07-15 09:39:13.691832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.722 [2024-07-15 09:39:13.692183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.722 [2024-07-15 09:39:13.692200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.722 [2024-07-15 09:39:13.701168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.722 [2024-07-15 09:39:13.701500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.722 [2024-07-15 09:39:13.701517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.722 [2024-07-15 09:39:13.708284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.722 [2024-07-15 09:39:13.708596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.722 [2024-07-15 09:39:13.708613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.722 [2024-07-15 09:39:13.713688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.722 [2024-07-15 09:39:13.714003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.722 [2024-07-15 09:39:13.714020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.722 [2024-07-15 09:39:13.721332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.722 [2024-07-15 09:39:13.721670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.722 [2024-07-15 09:39:13.721687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.722 [2024-07-15 09:39:13.726503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.722 [2024-07-15 09:39:13.726714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.722 [2024-07-15 09:39:13.726730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.722 [2024-07-15 09:39:13.732028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.722 [2024-07-15 09:39:13.732354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.722 [2024-07-15 09:39:13.732371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.722 [2024-07-15 09:39:13.738073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.722 [2024-07-15 09:39:13.738368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.722 [2024-07-15 09:39:13.738386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.722 [2024-07-15 09:39:13.745469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.722 [2024-07-15 09:39:13.745724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.722 [2024-07-15 09:39:13.745740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.723 [2024-07-15 09:39:13.753043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.723 [2024-07-15 09:39:13.753386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.723 [2024-07-15 09:39:13.753404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.723 [2024-07-15 09:39:13.759495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.723 [2024-07-15 09:39:13.759805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.723 [2024-07-15 09:39:13.759823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.723 [2024-07-15 09:39:13.767703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.723 [2024-07-15 09:39:13.768014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.723 [2024-07-15 09:39:13.768031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.723 [2024-07-15 09:39:13.775150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.723 [2024-07-15 09:39:13.775482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.723 [2024-07-15 09:39:13.775499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.723 [2024-07-15 09:39:13.783505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.723 [2024-07-15 09:39:13.783730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.723 [2024-07-15 09:39:13.783747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.723 [2024-07-15 09:39:13.789235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.723 [2024-07-15 09:39:13.789556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.723 [2024-07-15 09:39:13.789573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.723 [2024-07-15 09:39:13.796904] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.723 
[2024-07-15 09:39:13.797229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.723 [2024-07-15 09:39:13.797246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.723 [2024-07-15 09:39:13.801656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.723 [2024-07-15 09:39:13.801871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.723 [2024-07-15 09:39:13.801888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.723 [2024-07-15 09:39:13.806306] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.723 [2024-07-15 09:39:13.806516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.723 [2024-07-15 09:39:13.806532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.723 [2024-07-15 09:39:13.812055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.723 [2024-07-15 09:39:13.812361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.723 [2024-07-15 09:39:13.812379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.723 [2024-07-15 09:39:13.819131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.723 [2024-07-15 09:39:13.819466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.723 [2024-07-15 09:39:13.819484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.723 [2024-07-15 09:39:13.826775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.723 [2024-07-15 09:39:13.827083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.723 [2024-07-15 09:39:13.827100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.723 [2024-07-15 09:39:13.837705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.723 [2024-07-15 09:39:13.838049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.723 [2024-07-15 09:39:13.838067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.723 [2024-07-15 09:39:13.846207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) 
with pdu=0x2000190fef90 00:30:26.723 [2024-07-15 09:39:13.846515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.723 [2024-07-15 09:39:13.846532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.723 [2024-07-15 09:39:13.853294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.723 [2024-07-15 09:39:13.853641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.723 [2024-07-15 09:39:13.853658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.723 [2024-07-15 09:39:13.859195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.723 [2024-07-15 09:39:13.859509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.723 [2024-07-15 09:39:13.859526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.723 [2024-07-15 09:39:13.865430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.723 [2024-07-15 09:39:13.865792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.723 [2024-07-15 09:39:13.865812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.723 [2024-07-15 09:39:13.875075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.723 [2024-07-15 09:39:13.875306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.723 [2024-07-15 09:39:13.875322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.723 [2024-07-15 09:39:13.882843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.723 [2024-07-15 09:39:13.883176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.723 [2024-07-15 09:39:13.883193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.723 [2024-07-15 09:39:13.892196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.723 [2024-07-15 09:39:13.892552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.723 [2024-07-15 09:39:13.892569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.723 [2024-07-15 09:39:13.897397] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.723 [2024-07-15 09:39:13.897740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.723 [2024-07-15 09:39:13.897762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.723 [2024-07-15 09:39:13.904629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.723 [2024-07-15 09:39:13.904939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.723 [2024-07-15 09:39:13.904957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.723 [2024-07-15 09:39:13.910228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.723 [2024-07-15 09:39:13.910566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.723 [2024-07-15 09:39:13.910583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.723 [2024-07-15 09:39:13.915440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.723 [2024-07-15 09:39:13.915744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.723 [2024-07-15 09:39:13.915765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.986 [2024-07-15 09:39:13.921895] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:13.922217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:13.922234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.986 [2024-07-15 09:39:13.926804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:13.927144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:13.927162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.986 [2024-07-15 09:39:13.932655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:13.933001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:13.933018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.986 [2024-07-15 09:39:13.937948] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:13.938159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:13.938176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.986 [2024-07-15 09:39:13.942909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:13.943227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:13.943245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.986 [2024-07-15 09:39:13.947916] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:13.948275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:13.948292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.986 [2024-07-15 09:39:13.952819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:13.953172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:13.953189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.986 [2024-07-15 09:39:13.962121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:13.962422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:13.962440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.986 [2024-07-15 09:39:13.969880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:13.970187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:13.970204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.986 [2024-07-15 09:39:13.979405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:13.979714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:13.979732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:30:26.986 [2024-07-15 09:39:13.988319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:13.988404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:13.988420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.986 [2024-07-15 09:39:14.000225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:14.000558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:14.000576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.986 [2024-07-15 09:39:14.010214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:14.010305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:14.010320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.986 [2024-07-15 09:39:14.020295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:14.020652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:14.020669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.986 [2024-07-15 09:39:14.025658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:14.025969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:14.025986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.986 [2024-07-15 09:39:14.032220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:14.032533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:14.032550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.986 [2024-07-15 09:39:14.038556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:14.038863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:14.038881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.986 [2024-07-15 09:39:14.044665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:14.045007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:14.045024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.986 [2024-07-15 09:39:14.054137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:14.054479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:14.054499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.986 [2024-07-15 09:39:14.060733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:14.061079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:14.061096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.986 [2024-07-15 09:39:14.068030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:14.068359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:14.068376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.986 [2024-07-15 09:39:14.076638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:14.076995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:14.077012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.986 [2024-07-15 09:39:14.084212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:14.084554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:14.084571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.986 [2024-07-15 09:39:14.092125] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:14.092433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:14.092450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.986 [2024-07-15 09:39:14.100857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:14.101169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:14.101186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.986 [2024-07-15 09:39:14.108977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.986 [2024-07-15 09:39:14.109288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.986 [2024-07-15 09:39:14.109305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.987 [2024-07-15 09:39:14.116605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.987 [2024-07-15 09:39:14.116913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.987 [2024-07-15 09:39:14.116931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.987 [2024-07-15 09:39:14.122041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.987 [2024-07-15 09:39:14.122353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.987 [2024-07-15 09:39:14.122370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.987 [2024-07-15 09:39:14.127205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.987 [2024-07-15 09:39:14.127516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.987 [2024-07-15 09:39:14.127533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.987 [2024-07-15 09:39:14.134481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.987 [2024-07-15 09:39:14.134831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.987 [2024-07-15 09:39:14.134855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.987 [2024-07-15 09:39:14.143621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.987 [2024-07-15 09:39:14.143957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.987 [2024-07-15 09:39:14.143975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.987 [2024-07-15 09:39:14.153040] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.987 [2024-07-15 09:39:14.153362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.987 [2024-07-15 09:39:14.153379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.987 [2024-07-15 09:39:14.162866] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.987 [2024-07-15 09:39:14.163214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.987 [2024-07-15 09:39:14.163231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.987 [2024-07-15 09:39:14.172036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.987 [2024-07-15 09:39:14.172157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.987 [2024-07-15 09:39:14.172172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.987 [2024-07-15 09:39:14.182161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:26.987 [2024-07-15 09:39:14.182268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.987 [2024-07-15 09:39:14.182282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.249 [2024-07-15 09:39:14.190321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.249 [2024-07-15 09:39:14.190663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.249 [2024-07-15 09:39:14.190682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.249 [2024-07-15 09:39:14.199165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.249 [2024-07-15 09:39:14.199484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.249 [2024-07-15 09:39:14.199500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.249 [2024-07-15 09:39:14.207883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.249 [2024-07-15 09:39:14.208222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.249 
[2024-07-15 09:39:14.208239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.249 [2024-07-15 09:39:14.214882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.249 [2024-07-15 09:39:14.215218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.249 [2024-07-15 09:39:14.215235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.249 [2024-07-15 09:39:14.223041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.249 [2024-07-15 09:39:14.223385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.249 [2024-07-15 09:39:14.223402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.249 [2024-07-15 09:39:14.231735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.249 [2024-07-15 09:39:14.232069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.249 [2024-07-15 09:39:14.232086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.249 [2024-07-15 09:39:14.236978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.249 [2024-07-15 09:39:14.237301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.249 [2024-07-15 09:39:14.237318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.249 [2024-07-15 09:39:14.243065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.249 [2024-07-15 09:39:14.243362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.249 [2024-07-15 09:39:14.243378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.249 [2024-07-15 09:39:14.252882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.249 [2024-07-15 09:39:14.252952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.249 [2024-07-15 09:39:14.252967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.249 [2024-07-15 09:39:14.261744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.249 [2024-07-15 09:39:14.262098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.249 [2024-07-15 09:39:14.262115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.249 [2024-07-15 09:39:14.271000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.249 [2024-07-15 09:39:14.271337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.249 [2024-07-15 09:39:14.271353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.249 [2024-07-15 09:39:14.279607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.249 [2024-07-15 09:39:14.279942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.249 [2024-07-15 09:39:14.279959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.249 [2024-07-15 09:39:14.287951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.249 [2024-07-15 09:39:14.288266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.249 [2024-07-15 09:39:14.288283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.249 [2024-07-15 09:39:14.297050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.249 [2024-07-15 09:39:14.297312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.249 [2024-07-15 09:39:14.297328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.249 [2024-07-15 09:39:14.305826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.249 [2024-07-15 09:39:14.306151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.249 [2024-07-15 09:39:14.306168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.249 [2024-07-15 09:39:14.314216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.249 [2024-07-15 09:39:14.314532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.249 [2024-07-15 09:39:14.314548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.249 [2024-07-15 09:39:14.321241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.249 [2024-07-15 09:39:14.321580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.249 [2024-07-15 09:39:14.321597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.249 [2024-07-15 09:39:14.328742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.249 [2024-07-15 09:39:14.328826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.249 [2024-07-15 09:39:14.328841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.249 [2024-07-15 09:39:14.336853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.249 [2024-07-15 09:39:14.337122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.249 [2024-07-15 09:39:14.337139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.250 [2024-07-15 09:39:14.343951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.250 [2024-07-15 09:39:14.344277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.250 [2024-07-15 09:39:14.344293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.250 [2024-07-15 09:39:14.350425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.250 [2024-07-15 09:39:14.350650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.250 [2024-07-15 09:39:14.350666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.250 [2024-07-15 09:39:14.357696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.250 [2024-07-15 09:39:14.358020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.250 [2024-07-15 09:39:14.358036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.250 [2024-07-15 09:39:14.363839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.250 [2024-07-15 09:39:14.363944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.250 [2024-07-15 09:39:14.363959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.250 [2024-07-15 09:39:14.373244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.250 [2024-07-15 09:39:14.373660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.250 [2024-07-15 09:39:14.373677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.250 [2024-07-15 09:39:14.382629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.250 [2024-07-15 09:39:14.382972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.250 [2024-07-15 09:39:14.382989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.250 [2024-07-15 09:39:14.389161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.250 [2024-07-15 09:39:14.389257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.250 [2024-07-15 09:39:14.389272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.250 [2024-07-15 09:39:14.400248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.250 [2024-07-15 09:39:14.400588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.250 [2024-07-15 09:39:14.400608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.250 [2024-07-15 09:39:14.409343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.250 [2024-07-15 09:39:14.409663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.250 [2024-07-15 09:39:14.409680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.250 [2024-07-15 09:39:14.418742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.250 [2024-07-15 09:39:14.419079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.250 [2024-07-15 09:39:14.419095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.250 [2024-07-15 09:39:14.425931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.250 [2024-07-15 09:39:14.426276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.250 [2024-07-15 09:39:14.426293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.250 [2024-07-15 09:39:14.432253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.250 
[2024-07-15 09:39:14.432622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.250 [2024-07-15 09:39:14.432639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.250 [2024-07-15 09:39:14.438001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.250 [2024-07-15 09:39:14.438325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.250 [2024-07-15 09:39:14.438341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.250 [2024-07-15 09:39:14.444638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.250 [2024-07-15 09:39:14.444968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.250 [2024-07-15 09:39:14.444985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.511 [2024-07-15 09:39:14.450514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.511 [2024-07-15 09:39:14.450849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.511 [2024-07-15 09:39:14.450865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.511 [2024-07-15 09:39:14.458157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.511 [2024-07-15 09:39:14.458500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.511 [2024-07-15 09:39:14.458517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.511 [2024-07-15 09:39:14.464603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.511 [2024-07-15 09:39:14.464933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.511 [2024-07-15 09:39:14.464950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.511 [2024-07-15 09:39:14.474614] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.511 [2024-07-15 09:39:14.474924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.511 [2024-07-15 09:39:14.474941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.511 [2024-07-15 09:39:14.480401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.511 [2024-07-15 09:39:14.480660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.511 [2024-07-15 09:39:14.480675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.512 [2024-07-15 09:39:14.488781] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.512 [2024-07-15 09:39:14.489135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.512 [2024-07-15 09:39:14.489151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.512 [2024-07-15 09:39:14.496428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.512 [2024-07-15 09:39:14.496756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.512 [2024-07-15 09:39:14.496772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.512 [2024-07-15 09:39:14.502849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.512 [2024-07-15 09:39:14.503172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.512 [2024-07-15 09:39:14.503188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.512 [2024-07-15 09:39:14.511286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.512 [2024-07-15 09:39:14.511606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.512 [2024-07-15 09:39:14.511622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.512 [2024-07-15 09:39:14.521356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.512 [2024-07-15 09:39:14.521703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.512 [2024-07-15 09:39:14.521719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.512 [2024-07-15 09:39:14.532564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.512 [2024-07-15 09:39:14.532799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.512 [2024-07-15 09:39:14.532816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.512 [2024-07-15 09:39:14.541781] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.512 [2024-07-15 09:39:14.542127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.512 [2024-07-15 09:39:14.542144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.512 [2024-07-15 09:39:14.551050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.512 [2024-07-15 09:39:14.551383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.512 [2024-07-15 09:39:14.551400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.512 [2024-07-15 09:39:14.561399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.512 [2024-07-15 09:39:14.561735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.512 [2024-07-15 09:39:14.561757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.512 [2024-07-15 09:39:14.571518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.512 [2024-07-15 09:39:14.571730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.512 [2024-07-15 09:39:14.571746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.512 [2024-07-15 09:39:14.580069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.512 [2024-07-15 09:39:14.580427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.512 [2024-07-15 09:39:14.580443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.512 [2024-07-15 09:39:14.590551] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.512 [2024-07-15 09:39:14.590910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.512 [2024-07-15 09:39:14.590927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.512 [2024-07-15 09:39:14.601498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.512 [2024-07-15 09:39:14.601904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.512 [2024-07-15 09:39:14.601921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:30:27.512 [2024-07-15 09:39:14.613437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.512 [2024-07-15 09:39:14.613778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.512 [2024-07-15 09:39:14.613794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.512 [2024-07-15 09:39:14.623500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.512 [2024-07-15 09:39:14.623842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.512 [2024-07-15 09:39:14.623864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.512 [2024-07-15 09:39:14.630362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.512 [2024-07-15 09:39:14.630698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.512 [2024-07-15 09:39:14.630715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.512 [2024-07-15 09:39:14.639772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.512 [2024-07-15 09:39:14.640175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.512 [2024-07-15 09:39:14.640191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.512 [2024-07-15 09:39:14.647858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.512 [2024-07-15 09:39:14.648171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.512 [2024-07-15 09:39:14.648187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.512 [2024-07-15 09:39:14.656378] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.512 [2024-07-15 09:39:14.656707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.512 [2024-07-15 09:39:14.656724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.512 [2024-07-15 09:39:14.664195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.512 [2024-07-15 09:39:14.664531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.512 [2024-07-15 09:39:14.664548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.512 [2024-07-15 09:39:14.671941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.512 [2024-07-15 09:39:14.672283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.512 [2024-07-15 09:39:14.672300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.512 [2024-07-15 09:39:14.681298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.512 [2024-07-15 09:39:14.681647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.512 [2024-07-15 09:39:14.681664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.512 [2024-07-15 09:39:14.691140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.512 [2024-07-15 09:39:14.691529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.513 [2024-07-15 09:39:14.691545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.513 [2024-07-15 09:39:14.700847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.513 [2024-07-15 09:39:14.701169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.513 [2024-07-15 09:39:14.701186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.513 [2024-07-15 09:39:14.710151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.774 [2024-07-15 09:39:14.710370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.774 [2024-07-15 09:39:14.710387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.774 [2024-07-15 09:39:14.719781] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.774 [2024-07-15 09:39:14.720112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.774 [2024-07-15 09:39:14.720129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.774 [2024-07-15 09:39:14.730296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.774 [2024-07-15 09:39:14.730381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.774 [2024-07-15 09:39:14.730395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.774 [2024-07-15 09:39:14.740941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.774 [2024-07-15 09:39:14.741275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.774 [2024-07-15 09:39:14.741291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.774 [2024-07-15 09:39:14.751654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.774 [2024-07-15 09:39:14.751971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.774 [2024-07-15 09:39:14.751988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.774 [2024-07-15 09:39:14.762045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.774 [2024-07-15 09:39:14.762381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.774 [2024-07-15 09:39:14.762397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.774 [2024-07-15 09:39:14.772429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.774 [2024-07-15 09:39:14.772768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.774 [2024-07-15 09:39:14.772785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.774 [2024-07-15 09:39:14.783297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.774 [2024-07-15 09:39:14.783641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.774 [2024-07-15 09:39:14.783658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.774 [2024-07-15 09:39:14.795096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.774 [2024-07-15 09:39:14.795433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.774 [2024-07-15 09:39:14.795449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.774 [2024-07-15 09:39:14.806525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.774 [2024-07-15 09:39:14.806595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.774 [2024-07-15 09:39:14.806610] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.774 [2024-07-15 09:39:14.818760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.774 [2024-07-15 09:39:14.819111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.774 [2024-07-15 09:39:14.819128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.774 [2024-07-15 09:39:14.829865] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.774 [2024-07-15 09:39:14.830225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.774 [2024-07-15 09:39:14.830241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.774 [2024-07-15 09:39:14.841598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.774 [2024-07-15 09:39:14.841943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.774 [2024-07-15 09:39:14.841959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.774 [2024-07-15 09:39:14.853893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.774 [2024-07-15 09:39:14.854249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.774 [2024-07-15 09:39:14.854266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.774 [2024-07-15 09:39:14.865462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.774 [2024-07-15 09:39:14.865809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.774 [2024-07-15 09:39:14.865826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.774 [2024-07-15 09:39:14.876931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.774 [2024-07-15 09:39:14.877269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.774 [2024-07-15 09:39:14.877285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.774 [2024-07-15 09:39:14.885221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.774 [2024-07-15 09:39:14.885560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.774 
[2024-07-15 09:39:14.885579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.774 [2024-07-15 09:39:14.892649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.774 [2024-07-15 09:39:14.892982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.774 [2024-07-15 09:39:14.892999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.774 [2024-07-15 09:39:14.897730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.774 [2024-07-15 09:39:14.898048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.775 [2024-07-15 09:39:14.898064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.775 [2024-07-15 09:39:14.902671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.775 [2024-07-15 09:39:14.903017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.775 [2024-07-15 09:39:14.903033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.775 [2024-07-15 09:39:14.909623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.775 [2024-07-15 09:39:14.909840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.775 [2024-07-15 09:39:14.909856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.775 [2024-07-15 09:39:14.916333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.775 [2024-07-15 09:39:14.916668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.775 [2024-07-15 09:39:14.916684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.775 [2024-07-15 09:39:14.925310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.775 [2024-07-15 09:39:14.925384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.775 [2024-07-15 09:39:14.925398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.775 [2024-07-15 09:39:14.933272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.775 [2024-07-15 09:39:14.933598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.775 [2024-07-15 09:39:14.933615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.775 [2024-07-15 09:39:14.939371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.775 [2024-07-15 09:39:14.939692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.775 [2024-07-15 09:39:14.939709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.775 [2024-07-15 09:39:14.947089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.775 [2024-07-15 09:39:14.947412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.775 [2024-07-15 09:39:14.947428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.775 [2024-07-15 09:39:14.955117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.775 [2024-07-15 09:39:14.955435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.775 [2024-07-15 09:39:14.955451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.775 [2024-07-15 09:39:14.962521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.775 [2024-07-15 09:39:14.962838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.775 [2024-07-15 09:39:14.962856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.775 [2024-07-15 09:39:14.970022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:27.775 [2024-07-15 09:39:14.970348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.775 [2024-07-15 09:39:14.970364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.036 [2024-07-15 09:39:14.977113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.036 [2024-07-15 09:39:14.977444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.036 [2024-07-15 09:39:14.977461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.036 [2024-07-15 09:39:14.984084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.036 [2024-07-15 09:39:14.984413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.036 [2024-07-15 09:39:14.984430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.036 [2024-07-15 09:39:14.990889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.036 [2024-07-15 09:39:14.991236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.036 [2024-07-15 09:39:14.991252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.036 [2024-07-15 09:39:14.998183] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.036 [2024-07-15 09:39:14.998257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:14.998272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.008427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.008739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.008765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.018170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.018500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.018517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.025940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.026266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.026282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.033612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.033955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.033972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.041409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.041621] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.041638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.053686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.054035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.054052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.064662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.064983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.064999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.077084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.077421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.077438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.087693] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.088033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.088050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.094981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.095332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.095348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.104453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.104773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.104789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.111190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 
[2024-07-15 09:39:15.111405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.111422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.117576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.117912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.117928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.126163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.126493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.126509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.133944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.134281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.134298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.142819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.143133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.143150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.149397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.149732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.149748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.155186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.155601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.155618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.161647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.161967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.161983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.167382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.167707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.167723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.172400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.172619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.172636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.179608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.179820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.179836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.184412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.184621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.184638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.191532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.191885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.191902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.199898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.200212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.200229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.207562] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.207888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.207905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.216743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.217074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.217094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.225804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.226155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.226171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.037 [2024-07-15 09:39:15.233369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.037 [2024-07-15 09:39:15.233678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.037 [2024-07-15 09:39:15.233695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.298 [2024-07-15 09:39:15.242072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.298 [2024-07-15 09:39:15.242415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.298 [2024-07-15 09:39:15.242432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.298 [2024-07-15 09:39:15.248687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.298 [2024-07-15 09:39:15.249030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.298 [2024-07-15 09:39:15.249047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.298 [2024-07-15 09:39:15.256461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.298 [2024-07-15 09:39:15.256780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.298 [2024-07-15 09:39:15.256796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:30:28.298 [2024-07-15 09:39:15.265666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.298 [2024-07-15 09:39:15.265882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.298 [2024-07-15 09:39:15.265899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.298 [2024-07-15 09:39:15.273092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.298 [2024-07-15 09:39:15.273429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.298 [2024-07-15 09:39:15.273446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.298 [2024-07-15 09:39:15.280800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.298 [2024-07-15 09:39:15.281154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.298 [2024-07-15 09:39:15.281170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.298 [2024-07-15 09:39:15.290713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.298 [2024-07-15 09:39:15.290807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.299 [2024-07-15 09:39:15.290822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.299 [2024-07-15 09:39:15.300321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.299 [2024-07-15 09:39:15.300643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.299 [2024-07-15 09:39:15.300660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.299 [2024-07-15 09:39:15.310267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.299 [2024-07-15 09:39:15.310353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.299 [2024-07-15 09:39:15.310368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.299 [2024-07-15 09:39:15.320885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.299 [2024-07-15 09:39:15.321220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.299 [2024-07-15 09:39:15.321237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.299 [2024-07-15 09:39:15.330263] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.299 [2024-07-15 09:39:15.330579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.299 [2024-07-15 09:39:15.330596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.299 [2024-07-15 09:39:15.340170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.299 [2024-07-15 09:39:15.340513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.299 [2024-07-15 09:39:15.340530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.299 [2024-07-15 09:39:15.349911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.299 [2024-07-15 09:39:15.349994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.299 [2024-07-15 09:39:15.350008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.299 [2024-07-15 09:39:15.358939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.299 [2024-07-15 09:39:15.359258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.299 [2024-07-15 09:39:15.359275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.299 [2024-07-15 09:39:15.365624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.299 [2024-07-15 09:39:15.365949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.299 [2024-07-15 09:39:15.365966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.299 [2024-07-15 09:39:15.370925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.299 [2024-07-15 09:39:15.371250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.299 [2024-07-15 09:39:15.371266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.299 [2024-07-15 09:39:15.379114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.299 [2024-07-15 09:39:15.379440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.299 [2024-07-15 09:39:15.379457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.299 [2024-07-15 09:39:15.385992] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.299 [2024-07-15 09:39:15.386348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.299 [2024-07-15 09:39:15.386365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.299 [2024-07-15 09:39:15.393760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.299 [2024-07-15 09:39:15.394082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.299 [2024-07-15 09:39:15.394099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.299 [2024-07-15 09:39:15.400937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1332bf0) with pdu=0x2000190fef90 00:30:28.299 [2024-07-15 09:39:15.401272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.299 [2024-07-15 09:39:15.401287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.299 00:30:28.299 Latency(us) 00:30:28.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:28.299 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:28.299 nvme0n1 : 2.00 3907.37 488.42 0.00 0.00 4089.81 1993.39 12834.13 00:30:28.299 =================================================================================================================== 00:30:28.299 Total : 3907.37 488.42 0.00 0.00 4089.81 1993.39 12834.13 00:30:28.299 0 00:30:28.299 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:28.299 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:28.299 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:28.299 | .driver_specific 00:30:28.299 | .nvme_error 00:30:28.299 | .status_code 00:30:28.299 | .command_transient_transport_error' 00:30:28.299 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:28.558 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 252 > 0 )) 00:30:28.558 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 888607 00:30:28.558 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 888607 ']' 00:30:28.558 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 888607 00:30:28.558 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:30:28.558 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:28.558 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 888607 00:30:28.558 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:28.558 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:28.558 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 888607' 00:30:28.558 killing process with pid 888607 00:30:28.558 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 888607 00:30:28.558 Received shutdown signal, test time was about 2.000000 seconds 00:30:28.558 00:30:28.558 Latency(us) 00:30:28.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:28.558 =================================================================================================================== 00:30:28.558 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:28.558 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 888607 00:30:28.818 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 885660 00:30:28.818 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 885660 ']' 00:30:28.818 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 885660 00:30:28.818 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:30:28.818 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:28.818 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 885660 00:30:28.818 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:28.818 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:28.818 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 885660' 00:30:28.818 killing process with pid 885660 00:30:28.818 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 885660 00:30:28.818 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 885660 00:30:28.818 00:30:28.818 real 0m15.945s 00:30:28.818 user 0m31.320s 00:30:28.818 sys 0m3.322s 00:30:28.818 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:28.818 09:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:28.818 ************************************ 00:30:28.818 END TEST nvmf_digest_error 00:30:28.818 ************************************ 00:30:28.818 09:39:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:30:28.818 09:39:16 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:30:28.818 09:39:16 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:30:28.818 09:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:28.818 09:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:30:28.818 09:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:28.818 09:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:30:28.818 09:39:16 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:30:28.818 09:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:28.818 rmmod nvme_tcp 00:30:29.079 rmmod nvme_fabrics 00:30:29.079 rmmod nvme_keyring 00:30:29.079 09:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:29.079 09:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:30:29.079 09:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:30:29.079 09:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 885660 ']' 00:30:29.079 09:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 885660 00:30:29.079 09:39:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 885660 ']' 00:30:29.079 09:39:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 885660 00:30:29.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (885660) - No such process 00:30:29.079 09:39:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 885660 is not found' 00:30:29.079 Process with pid 885660 is not found 00:30:29.079 09:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:29.079 09:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:29.079 09:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:29.079 09:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:29.079 09:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:29.079 09:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.079 09:39:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:29.079 09:39:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.987 09:39:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:30.987 00:30:30.987 real 0m42.784s 00:30:30.987 user 1m5.254s 00:30:30.987 sys 0m12.912s 00:30:30.987 09:39:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:30.987 09:39:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:30.987 ************************************ 00:30:30.987 END TEST nvmf_digest 00:30:30.987 ************************************ 00:30:31.247 09:39:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:31.247 09:39:18 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:30:31.247 09:39:18 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:30:31.247 09:39:18 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:30:31.247 09:39:18 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:31.247 09:39:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:31.247 09:39:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:31.247 09:39:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:31.247 ************************************ 00:30:31.247 START TEST nvmf_bdevperf 00:30:31.247 ************************************ 00:30:31.247 09:39:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:31.247 * Looking for test storage... 
00:30:31.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:31.247 09:39:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:31.247 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:30:31.247 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:31.247 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:31.247 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:31.247 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:31.247 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:31.247 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:31.247 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:31.247 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:31.247 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:31.247 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:31.247 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:31.247 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:31.247 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:31.247 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:31.247 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:31.247 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:31.247 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:31.247 09:39:18 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:31.247 09:39:18 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:31.247 09:39:18 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:30:31.248 09:39:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:39.387 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:39.387 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:39.387 Found net devices under 0000:31:00.0: cvl_0_0 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:39.387 Found net devices under 0000:31:00.1: cvl_0_1 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:39.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:39.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:30:39.387 00:30:39.387 --- 10.0.0.2 ping statistics --- 00:30:39.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.387 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:39.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:39.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:30:39.387 00:30:39.387 --- 10.0.0.1 ping statistics --- 00:30:39.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.387 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=893978 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 893978 00:30:39.387 09:39:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:39.388 09:39:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 893978 ']' 00:30:39.388 09:39:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:39.388 09:39:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:39.388 09:39:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:39.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:39.388 09:39:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:39.388 09:39:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:39.648 [2024-07-15 09:39:26.634466] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:30:39.648 [2024-07-15 09:39:26.634530] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:39.648 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.648 [2024-07-15 09:39:26.728746] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:39.648 [2024-07-15 09:39:26.824921] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:39.648 [2024-07-15 09:39:26.824980] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:39.648 [2024-07-15 09:39:26.824989] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:39.648 [2024-07-15 09:39:26.824996] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:39.648 [2024-07-15 09:39:26.825002] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:39.648 [2024-07-15 09:39:26.825132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:39.648 [2024-07-15 09:39:26.825294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.648 [2024-07-15 09:39:26.825295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:40.218 09:39:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:40.479 [2024-07-15 09:39:27.463370] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:40.479 Malloc0 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:40.479 [2024-07-15 09:39:27.530035] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:40.479 { 00:30:40.479 "params": { 00:30:40.479 "name": "Nvme$subsystem", 00:30:40.479 "trtype": "$TEST_TRANSPORT", 00:30:40.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:40.479 "adrfam": "ipv4", 00:30:40.479 "trsvcid": "$NVMF_PORT", 00:30:40.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:40.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:40.479 "hdgst": ${hdgst:-false}, 00:30:40.479 "ddgst": ${ddgst:-false} 00:30:40.479 }, 00:30:40.479 "method": "bdev_nvme_attach_controller" 00:30:40.479 } 00:30:40.479 EOF 00:30:40.479 )") 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:30:40.479 09:39:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:40.479 "params": { 00:30:40.479 "name": "Nvme1", 00:30:40.479 "trtype": "tcp", 00:30:40.479 "traddr": "10.0.0.2", 00:30:40.479 "adrfam": "ipv4", 00:30:40.479 "trsvcid": "4420", 00:30:40.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:40.479 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:40.479 "hdgst": false, 00:30:40.479 "ddgst": false 00:30:40.479 }, 00:30:40.479 "method": "bdev_nvme_attach_controller" 00:30:40.479 }' 00:30:40.480 [2024-07-15 09:39:27.584503] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:30:40.480 [2024-07-15 09:39:27.584552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid894326 ] 00:30:40.480 EAL: No free 2048 kB hugepages reported on node 1 00:30:40.480 [2024-07-15 09:39:27.649700] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.740 [2024-07-15 09:39:27.714258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.740 Running I/O for 1 seconds... 
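For reference, the target-side setup traced above reduces to the following rpc.py sequence. This is only a sketch reconstructed from the rpc_cmd calls visible in this log: the rpc.py path, the Malloc0/cnode1 names and the 10.0.0.2:4420 listener are simply the values this run used, and /tmp/bdevperf.json stands in for the /dev/fd/62 process substitution the script passed to bdevperf.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192        # flags copied verbatim from the trace
$RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Initiator side: bdevperf with the JSON printed above
# (128 outstanding I/Os, 4096-byte I/Os, verify workload, 1 second run):
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1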
00:30:42.117 00:30:42.117 Latency(us) 00:30:42.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:42.117 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:42.117 Verification LBA range: start 0x0 length 0x4000 00:30:42.117 Nvme1n1 : 1.05 8759.30 34.22 0.00 0.00 13996.17 3031.04 42598.40 00:30:42.117 =================================================================================================================== 00:30:42.117 Total : 8759.30 34.22 0.00 0.00 13996.17 3031.04 42598.40 00:30:42.117 09:39:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=894610 00:30:42.117 09:39:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:30:42.117 09:39:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:42.117 09:39:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:42.117 09:39:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:30:42.117 09:39:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:30:42.117 09:39:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:42.117 09:39:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:42.117 { 00:30:42.117 "params": { 00:30:42.117 "name": "Nvme$subsystem", 00:30:42.117 "trtype": "$TEST_TRANSPORT", 00:30:42.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.117 "adrfam": "ipv4", 00:30:42.117 "trsvcid": "$NVMF_PORT", 00:30:42.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.117 "hdgst": ${hdgst:-false}, 00:30:42.117 "ddgst": ${ddgst:-false} 00:30:42.117 }, 00:30:42.117 "method": "bdev_nvme_attach_controller" 00:30:42.117 } 00:30:42.117 EOF 00:30:42.117 )") 00:30:42.117 09:39:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:30:42.117 09:39:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:30:42.117 09:39:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:30:42.117 09:39:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:42.117 "params": { 00:30:42.117 "name": "Nvme1", 00:30:42.117 "trtype": "tcp", 00:30:42.117 "traddr": "10.0.0.2", 00:30:42.117 "adrfam": "ipv4", 00:30:42.117 "trsvcid": "4420", 00:30:42.117 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:42.117 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:42.117 "hdgst": false, 00:30:42.117 "ddgst": false 00:30:42.117 }, 00:30:42.117 "method": "bdev_nvme_attach_controller" 00:30:42.117 }' 00:30:42.117 [2024-07-15 09:39:29.132897] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:30:42.117 [2024-07-15 09:39:29.132955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid894610 ] 00:30:42.117 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.117 [2024-07-15 09:39:29.197804] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.117 [2024-07-15 09:39:29.261472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.376 Running I/O for 15 seconds... 
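Note on the output that follows: the second bdevperf pass (-t 15 -f, same JSON config) is deliberately interrupted. The harness hard-kills the nvmf target and then sleeps, so the long run of "ABORTED - SQ DELETION" completions below is the initiator failing READ commands that were still outstanding when the connection went away, which appears to be the error path this part of the test exercises. In shell terms the step traced below is roughly (893978 is this run's nvmf_tgt pid):
kill -9 893978   # host/bdevperf.sh@33: kill the target while bdevperf is mid-workload
sleep 3          # host/bdevperf.sh@35: give bdevperf time to observe the dead connection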
00:30:44.916 09:39:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 893978 00:30:44.916 09:39:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:30:44.916 [2024-07-15 09:39:32.100608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.916 [2024-07-15 09:39:32.100650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.916 [2024-07-15 09:39:32.100672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:100800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.916 [2024-07-15 09:39:32.100681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.916 [2024-07-15 09:39:32.100692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:100808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.916 [2024-07-15 09:39:32.100700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.916 [2024-07-15 09:39:32.100709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:100816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.916 [2024-07-15 09:39:32.100716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.916 [2024-07-15 09:39:32.100727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:100824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.916 [2024-07-15 09:39:32.100736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.916 [2024-07-15 09:39:32.100746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.916 [2024-07-15 09:39:32.100855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.916 [2024-07-15 09:39:32.100868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.916 [2024-07-15 09:39:32.100878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.916 [2024-07-15 09:39:32.100888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:100848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.916 [2024-07-15 09:39:32.100896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.916 [2024-07-15 09:39:32.100907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.916 [2024-07-15 09:39:32.100918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.916 [2024-07-15 09:39:32.100929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:100864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.916 [2024-07-15 
09:39:32.100945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.916 [2024-07-15 09:39:32.100956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:100872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.916 [2024-07-15 09:39:32.100969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.916 [2024-07-15 09:39:32.100984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:100880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.916 [2024-07-15 09:39:32.100996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.916 [2024-07-15 09:39:32.101007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:100888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.916 [2024-07-15 09:39:32.101014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.916 [2024-07-15 09:39:32.101023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.916 [2024-07-15 09:39:32.101030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.916 [2024-07-15 09:39:32.101039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:100904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.916 [2024-07-15 09:39:32.101046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.916 [2024-07-15 09:39:32.101057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:100912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.916 [2024-07-15 09:39:32.101064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.916 [2024-07-15 09:39:32.101075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:100920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.916 [2024-07-15 09:39:32.101082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.916 [2024-07-15 09:39:32.101092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.916 [2024-07-15 09:39:32.101101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.916 [2024-07-15 09:39:32.101111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:100936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.916 [2024-07-15 09:39:32.101120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.916 [2024-07-15 09:39:32.101130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.916 [2024-07-15 09:39:32.101139] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.916 [2024-07-15 09:39:32.101149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.916 [2024-07-15 09:39:32.101159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.916 [2024-07-15 09:39:32.101169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:100960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:100976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:100984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:100992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:101008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:101016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:101024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101312] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:101032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:101048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:101056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:101064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:101072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:101080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:101088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:101096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:101104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:101120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:101128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:101136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:101144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:101152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:101160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:101168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:101176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:101184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:101192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:101200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:101208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:101216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:101232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:101240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:101256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:101264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:44.917 [2024-07-15 09:39:32.101823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:101272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:101288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:101296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:101304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:101312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:101320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:101328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:101336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.101979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.101988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 
09:39:32.101999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:101352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.102007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.102018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:101360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.102027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.102038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.102047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.102056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:101376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.102065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.102075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:101384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.102084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.102094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.102101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.102110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.102118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.102127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:101408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.102134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.102144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.102151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.102160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:101424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.102168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.102177] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:101432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.102184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.102193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:101440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.102200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.102209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:101448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.102216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.102226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:101456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.102233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.102242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:101464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.102249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.102259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.102267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.102277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:101480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.102284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.102293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:101488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.102299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.102309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.917 [2024-07-15 09:39:32.102316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.917 [2024-07-15 09:39:32.102326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.918 [2024-07-15 09:39:32.102333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.918 [2024-07-15 09:39:32.102349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.918 [2024-07-15 09:39:32.102365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.918 [2024-07-15 09:39:32.102383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.918 [2024-07-15 09:39:32.102400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.918 [2024-07-15 09:39:32.102416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:101552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.918 [2024-07-15 09:39:32.102433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:101560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.918 [2024-07-15 09:39:32.102449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:101568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:101584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102516] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:101592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:101616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:101624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:101632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 
lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:101680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:101728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:101744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:101792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.918 [2024-07-15 09:39:32.102978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.102986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114a940 is same with the state(5) to be set 00:30:44.918 [2024-07-15 09:39:32.102997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.918 [2024-07-15 09:39:32.103003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.918 [2024-07-15 09:39:32.103009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101808 len:8 PRP1 0x0 PRP2 0x0 00:30:44.918 [2024-07-15 09:39:32.103019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.918 [2024-07-15 09:39:32.103061] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x114a940 was disconnected and freed. reset controller. 
00:30:44.918 [2024-07-15 09:39:32.106635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:44.918 [2024-07-15 09:39:32.106684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:44.918 [2024-07-15 09:39:32.107493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.918 [2024-07-15 09:39:32.107510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:44.918 [2024-07-15 09:39:32.107519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:44.918 [2024-07-15 09:39:32.107736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:44.918 [2024-07-15 09:39:32.107961] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:44.918 [2024-07-15 09:39:32.107970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:44.918 [2024-07-15 09:39:32.107980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:44.918 [2024-07-15 09:39:32.111499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.180 [2024-07-15 09:39:32.120854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.180 [2024-07-15 09:39:32.121470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.180 [2024-07-15 09:39:32.121508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.180 [2024-07-15 09:39:32.121521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.180 [2024-07-15 09:39:32.121772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.180 [2024-07-15 09:39:32.121994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.180 [2024-07-15 09:39:32.122004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.180 [2024-07-15 09:39:32.122012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.180 [2024-07-15 09:39:32.125511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.180 [2024-07-15 09:39:32.134793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.180 [2024-07-15 09:39:32.135339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.180 [2024-07-15 09:39:32.135358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.180 [2024-07-15 09:39:32.135365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.180 [2024-07-15 09:39:32.135582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.180 [2024-07-15 09:39:32.135807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.180 [2024-07-15 09:39:32.135816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.180 [2024-07-15 09:39:32.135823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.180 [2024-07-15 09:39:32.139330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.180 [2024-07-15 09:39:32.148602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.180 [2024-07-15 09:39:32.149280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.180 [2024-07-15 09:39:32.149318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.180 [2024-07-15 09:39:32.149329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.180 [2024-07-15 09:39:32.149565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.180 [2024-07-15 09:39:32.149792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.180 [2024-07-15 09:39:32.149803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.180 [2024-07-15 09:39:32.149810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.180 [2024-07-15 09:39:32.153308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.180 [2024-07-15 09:39:32.162380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.180 [2024-07-15 09:39:32.163046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.180 [2024-07-15 09:39:32.163084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.180 [2024-07-15 09:39:32.163095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.180 [2024-07-15 09:39:32.163331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.180 [2024-07-15 09:39:32.163551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.180 [2024-07-15 09:39:32.163561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.180 [2024-07-15 09:39:32.163573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.180 [2024-07-15 09:39:32.167081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.180 [2024-07-15 09:39:32.176154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.180 [2024-07-15 09:39:32.176805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.180 [2024-07-15 09:39:32.176844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.180 [2024-07-15 09:39:32.176856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.180 [2024-07-15 09:39:32.177095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.180 [2024-07-15 09:39:32.177316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.180 [2024-07-15 09:39:32.177325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.180 [2024-07-15 09:39:32.177332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.180 [2024-07-15 09:39:32.180837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.180 [2024-07-15 09:39:32.189909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.180 [2024-07-15 09:39:32.190495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.180 [2024-07-15 09:39:32.190515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.180 [2024-07-15 09:39:32.190522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.180 [2024-07-15 09:39:32.190739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.180 [2024-07-15 09:39:32.190961] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.180 [2024-07-15 09:39:32.190970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.180 [2024-07-15 09:39:32.190977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.180 [2024-07-15 09:39:32.194472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.180 [2024-07-15 09:39:32.203744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.180 [2024-07-15 09:39:32.204284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.180 [2024-07-15 09:39:32.204301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.180 [2024-07-15 09:39:32.204308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.181 [2024-07-15 09:39:32.204523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.181 [2024-07-15 09:39:32.204740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.181 [2024-07-15 09:39:32.204749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.181 [2024-07-15 09:39:32.204763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.181 [2024-07-15 09:39:32.208253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.181 [2024-07-15 09:39:32.217527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.181 [2024-07-15 09:39:32.218072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.181 [2024-07-15 09:39:32.218092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.181 [2024-07-15 09:39:32.218100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.181 [2024-07-15 09:39:32.218316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.181 [2024-07-15 09:39:32.218533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.181 [2024-07-15 09:39:32.218542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.181 [2024-07-15 09:39:32.218549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.181 [2024-07-15 09:39:32.222046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.181 [2024-07-15 09:39:32.231319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.181 [2024-07-15 09:39:32.231969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.181 [2024-07-15 09:39:32.232008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.181 [2024-07-15 09:39:32.232018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.181 [2024-07-15 09:39:32.232255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.181 [2024-07-15 09:39:32.232475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.181 [2024-07-15 09:39:32.232485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.181 [2024-07-15 09:39:32.232492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.181 [2024-07-15 09:39:32.235998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.181 [2024-07-15 09:39:32.245084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.181 [2024-07-15 09:39:32.245643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.181 [2024-07-15 09:39:32.245662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.181 [2024-07-15 09:39:32.245669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.181 [2024-07-15 09:39:32.245891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.181 [2024-07-15 09:39:32.246109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.181 [2024-07-15 09:39:32.246118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.181 [2024-07-15 09:39:32.246125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.181 [2024-07-15 09:39:32.249619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.181 [2024-07-15 09:39:32.258896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.181 [2024-07-15 09:39:32.259460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.181 [2024-07-15 09:39:32.259476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.181 [2024-07-15 09:39:32.259484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.181 [2024-07-15 09:39:32.259699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.181 [2024-07-15 09:39:32.259925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.181 [2024-07-15 09:39:32.259935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.181 [2024-07-15 09:39:32.259941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.181 [2024-07-15 09:39:32.263434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.181 [2024-07-15 09:39:32.272706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.181 [2024-07-15 09:39:32.273348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.181 [2024-07-15 09:39:32.273386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.181 [2024-07-15 09:39:32.273397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.181 [2024-07-15 09:39:32.273633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.181 [2024-07-15 09:39:32.273860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.181 [2024-07-15 09:39:32.273870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.181 [2024-07-15 09:39:32.273878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.181 [2024-07-15 09:39:32.277378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.181 [2024-07-15 09:39:32.286449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.181 [2024-07-15 09:39:32.286992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.181 [2024-07-15 09:39:32.287012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.181 [2024-07-15 09:39:32.287020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.181 [2024-07-15 09:39:32.287237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.181 [2024-07-15 09:39:32.287453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.181 [2024-07-15 09:39:32.287462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.181 [2024-07-15 09:39:32.287469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.181 [2024-07-15 09:39:32.290969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.181 [2024-07-15 09:39:32.300245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.181 [2024-07-15 09:39:32.300963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.181 [2024-07-15 09:39:32.301001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.181 [2024-07-15 09:39:32.301012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.181 [2024-07-15 09:39:32.301248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.181 [2024-07-15 09:39:32.301469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.181 [2024-07-15 09:39:32.301479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.181 [2024-07-15 09:39:32.301486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.181 [2024-07-15 09:39:32.304997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.181 [2024-07-15 09:39:32.314072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.181 [2024-07-15 09:39:32.314773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.181 [2024-07-15 09:39:32.314811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.181 [2024-07-15 09:39:32.314823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.181 [2024-07-15 09:39:32.315062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.181 [2024-07-15 09:39:32.315283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.181 [2024-07-15 09:39:32.315292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.181 [2024-07-15 09:39:32.315300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.181 [2024-07-15 09:39:32.318808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.181 [2024-07-15 09:39:32.327884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.181 [2024-07-15 09:39:32.328323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.181 [2024-07-15 09:39:32.328345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.181 [2024-07-15 09:39:32.328353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.181 [2024-07-15 09:39:32.328572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.181 [2024-07-15 09:39:32.328800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.181 [2024-07-15 09:39:32.328811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.181 [2024-07-15 09:39:32.328818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.181 [2024-07-15 09:39:32.332314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.181 [2024-07-15 09:39:32.341810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.181 [2024-07-15 09:39:32.342481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.181 [2024-07-15 09:39:32.342519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.181 [2024-07-15 09:39:32.342529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.181 [2024-07-15 09:39:32.342774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.181 [2024-07-15 09:39:32.342995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.181 [2024-07-15 09:39:32.343007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.181 [2024-07-15 09:39:32.343015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.181 [2024-07-15 09:39:32.346514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.181 [2024-07-15 09:39:32.355588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.181 [2024-07-15 09:39:32.356237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.181 [2024-07-15 09:39:32.356275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.182 [2024-07-15 09:39:32.356291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.182 [2024-07-15 09:39:32.356528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.182 [2024-07-15 09:39:32.356748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.182 [2024-07-15 09:39:32.356766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.182 [2024-07-15 09:39:32.356774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.182 [2024-07-15 09:39:32.360273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.182 [2024-07-15 09:39:32.369359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.182 [2024-07-15 09:39:32.370032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.182 [2024-07-15 09:39:32.370069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.182 [2024-07-15 09:39:32.370080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.182 [2024-07-15 09:39:32.370317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.182 [2024-07-15 09:39:32.370539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.182 [2024-07-15 09:39:32.370548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.182 [2024-07-15 09:39:32.370556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.182 [2024-07-15 09:39:32.374072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.445 [2024-07-15 09:39:32.383155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.445 [2024-07-15 09:39:32.383585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.445 [2024-07-15 09:39:32.383604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.445 [2024-07-15 09:39:32.383612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.445 [2024-07-15 09:39:32.383836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.445 [2024-07-15 09:39:32.384053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.445 [2024-07-15 09:39:32.384062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.445 [2024-07-15 09:39:32.384069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.445 [2024-07-15 09:39:32.387565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.445 [2024-07-15 09:39:32.397063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.445 [2024-07-15 09:39:32.397634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.445 [2024-07-15 09:39:32.397650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.445 [2024-07-15 09:39:32.397658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.445 [2024-07-15 09:39:32.397879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.445 [2024-07-15 09:39:32.398097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.445 [2024-07-15 09:39:32.398110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.445 [2024-07-15 09:39:32.398117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.445 [2024-07-15 09:39:32.401616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.445 [2024-07-15 09:39:32.410908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.445 [2024-07-15 09:39:32.411438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.445 [2024-07-15 09:39:32.411454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.445 [2024-07-15 09:39:32.411462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.445 [2024-07-15 09:39:32.411677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.445 [2024-07-15 09:39:32.411900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.445 [2024-07-15 09:39:32.411909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.445 [2024-07-15 09:39:32.411916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.445 [2024-07-15 09:39:32.415412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.445 [2024-07-15 09:39:32.424696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.445 [2024-07-15 09:39:32.425366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.445 [2024-07-15 09:39:32.425405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.445 [2024-07-15 09:39:32.425416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.445 [2024-07-15 09:39:32.425652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.445 [2024-07-15 09:39:32.425883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.445 [2024-07-15 09:39:32.425893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.445 [2024-07-15 09:39:32.425901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.445 [2024-07-15 09:39:32.429402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.445 [2024-07-15 09:39:32.438497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.445 [2024-07-15 09:39:32.439072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.445 [2024-07-15 09:39:32.439094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.445 [2024-07-15 09:39:32.439103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.445 [2024-07-15 09:39:32.439321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.445 [2024-07-15 09:39:32.439538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.445 [2024-07-15 09:39:32.439547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.445 [2024-07-15 09:39:32.439554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.445 [2024-07-15 09:39:32.443061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.445 [2024-07-15 09:39:32.452349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.445 [2024-07-15 09:39:32.453054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.445 [2024-07-15 09:39:32.453072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.445 [2024-07-15 09:39:32.453079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.445 [2024-07-15 09:39:32.453296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.445 [2024-07-15 09:39:32.453513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.445 [2024-07-15 09:39:32.453521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.445 [2024-07-15 09:39:32.453528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.445 [2024-07-15 09:39:32.457031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.445 [2024-07-15 09:39:32.466110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.445 [2024-07-15 09:39:32.466679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.445 [2024-07-15 09:39:32.466695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.445 [2024-07-15 09:39:32.466703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.445 [2024-07-15 09:39:32.466924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.445 [2024-07-15 09:39:32.467141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.445 [2024-07-15 09:39:32.467150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.445 [2024-07-15 09:39:32.467157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.445 [2024-07-15 09:39:32.470652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.445 [2024-07-15 09:39:32.479942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.445 [2024-07-15 09:39:32.480502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.445 [2024-07-15 09:39:32.480518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.445 [2024-07-15 09:39:32.480525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.445 [2024-07-15 09:39:32.480741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.445 [2024-07-15 09:39:32.480964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.445 [2024-07-15 09:39:32.480973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.445 [2024-07-15 09:39:32.480980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.445 [2024-07-15 09:39:32.484474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.445 [2024-07-15 09:39:32.493765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.445 [2024-07-15 09:39:32.494343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.445 [2024-07-15 09:39:32.494359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.445 [2024-07-15 09:39:32.494366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.445 [2024-07-15 09:39:32.494586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.445 [2024-07-15 09:39:32.494809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.445 [2024-07-15 09:39:32.494818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.445 [2024-07-15 09:39:32.494825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.445 [2024-07-15 09:39:32.498320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.446 [2024-07-15 09:39:32.507604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.446 [2024-07-15 09:39:32.508167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.446 [2024-07-15 09:39:32.508183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.446 [2024-07-15 09:39:32.508190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.446 [2024-07-15 09:39:32.508405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.446 [2024-07-15 09:39:32.508622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.446 [2024-07-15 09:39:32.508630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.446 [2024-07-15 09:39:32.508637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.446 [2024-07-15 09:39:32.512137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.446 [2024-07-15 09:39:32.521423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.446 [2024-07-15 09:39:32.521977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.446 [2024-07-15 09:39:32.521993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.446 [2024-07-15 09:39:32.522001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.446 [2024-07-15 09:39:32.522217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.446 [2024-07-15 09:39:32.522433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.446 [2024-07-15 09:39:32.522442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.446 [2024-07-15 09:39:32.522449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.446 [2024-07-15 09:39:32.525948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.446 [2024-07-15 09:39:32.535231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.446 [2024-07-15 09:39:32.535797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.446 [2024-07-15 09:39:32.535813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.446 [2024-07-15 09:39:32.535821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.446 [2024-07-15 09:39:32.536037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.446 [2024-07-15 09:39:32.536253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.446 [2024-07-15 09:39:32.536262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.446 [2024-07-15 09:39:32.536273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.446 [2024-07-15 09:39:32.539787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.446 [2024-07-15 09:39:32.549077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.446 [2024-07-15 09:39:32.549653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.446 [2024-07-15 09:39:32.549668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.446 [2024-07-15 09:39:32.549676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.446 [2024-07-15 09:39:32.549897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.446 [2024-07-15 09:39:32.550115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.446 [2024-07-15 09:39:32.550124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.446 [2024-07-15 09:39:32.550131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.446 [2024-07-15 09:39:32.553625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.446 [2024-07-15 09:39:32.562917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.446 [2024-07-15 09:39:32.563445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.446 [2024-07-15 09:39:32.563461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.446 [2024-07-15 09:39:32.563468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.446 [2024-07-15 09:39:32.563684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.446 [2024-07-15 09:39:32.563906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.446 [2024-07-15 09:39:32.563916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.446 [2024-07-15 09:39:32.563923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.446 [2024-07-15 09:39:32.567419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.446 [2024-07-15 09:39:32.576707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.446 [2024-07-15 09:39:32.577244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.446 [2024-07-15 09:39:32.577259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.446 [2024-07-15 09:39:32.577267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.446 [2024-07-15 09:39:32.577483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.446 [2024-07-15 09:39:32.577699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.446 [2024-07-15 09:39:32.577707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.446 [2024-07-15 09:39:32.577715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.446 [2024-07-15 09:39:32.581219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.446 [2024-07-15 09:39:32.590506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.446 [2024-07-15 09:39:32.591149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.446 [2024-07-15 09:39:32.591191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.446 [2024-07-15 09:39:32.591202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.446 [2024-07-15 09:39:32.591439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.446 [2024-07-15 09:39:32.591659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.446 [2024-07-15 09:39:32.591669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.446 [2024-07-15 09:39:32.591676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.446 [2024-07-15 09:39:32.595178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.446 [2024-07-15 09:39:32.604261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.446 [2024-07-15 09:39:32.604811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.446 [2024-07-15 09:39:32.604831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.446 [2024-07-15 09:39:32.604839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.446 [2024-07-15 09:39:32.605057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.446 [2024-07-15 09:39:32.605273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.446 [2024-07-15 09:39:32.605282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.446 [2024-07-15 09:39:32.605289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.446 [2024-07-15 09:39:32.609010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.446 [2024-07-15 09:39:32.618108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.446 [2024-07-15 09:39:32.618685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.446 [2024-07-15 09:39:32.618703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.446 [2024-07-15 09:39:32.618711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.446 [2024-07-15 09:39:32.618934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.446 [2024-07-15 09:39:32.619152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.446 [2024-07-15 09:39:32.619161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.446 [2024-07-15 09:39:32.619168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.446 [2024-07-15 09:39:32.622663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.446 [2024-07-15 09:39:32.631953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.446 [2024-07-15 09:39:32.632483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.446 [2024-07-15 09:39:32.632499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.446 [2024-07-15 09:39:32.632507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.446 [2024-07-15 09:39:32.632722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.446 [2024-07-15 09:39:32.632953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.446 [2024-07-15 09:39:32.632962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.446 [2024-07-15 09:39:32.632969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.446 [2024-07-15 09:39:32.636464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.708 [2024-07-15 09:39:32.645764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.709 [2024-07-15 09:39:32.646436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.709 [2024-07-15 09:39:32.646474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.709 [2024-07-15 09:39:32.646484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.709 [2024-07-15 09:39:32.646721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.709 [2024-07-15 09:39:32.646952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.709 [2024-07-15 09:39:32.646962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.709 [2024-07-15 09:39:32.646970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.709 [2024-07-15 09:39:32.650472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.709 [2024-07-15 09:39:32.659557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.709 [2024-07-15 09:39:32.660110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.709 [2024-07-15 09:39:32.660129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.709 [2024-07-15 09:39:32.660137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.709 [2024-07-15 09:39:32.660354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.709 [2024-07-15 09:39:32.660571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.709 [2024-07-15 09:39:32.660579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.709 [2024-07-15 09:39:32.660586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.709 [2024-07-15 09:39:32.664090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.709 [2024-07-15 09:39:32.673375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.709 [2024-07-15 09:39:32.673763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.709 [2024-07-15 09:39:32.673782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.709 [2024-07-15 09:39:32.673790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.709 [2024-07-15 09:39:32.674007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.709 [2024-07-15 09:39:32.674224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.709 [2024-07-15 09:39:32.674233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.709 [2024-07-15 09:39:32.674240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.709 [2024-07-15 09:39:32.677742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.709 [2024-07-15 09:39:32.687240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.709 [2024-07-15 09:39:32.687851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.709 [2024-07-15 09:39:32.687889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.709 [2024-07-15 09:39:32.687901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.709 [2024-07-15 09:39:32.688140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.709 [2024-07-15 09:39:32.688360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.709 [2024-07-15 09:39:32.688369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.709 [2024-07-15 09:39:32.688377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.709 [2024-07-15 09:39:32.691885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.709 [2024-07-15 09:39:32.701158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.709 [2024-07-15 09:39:32.701743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.709 [2024-07-15 09:39:32.701768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.709 [2024-07-15 09:39:32.701776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.709 [2024-07-15 09:39:32.701992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.709 [2024-07-15 09:39:32.702209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.709 [2024-07-15 09:39:32.702218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.709 [2024-07-15 09:39:32.702225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.709 [2024-07-15 09:39:32.705719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.709 [2024-07-15 09:39:32.715007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.709 [2024-07-15 09:39:32.715674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.709 [2024-07-15 09:39:32.715712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.709 [2024-07-15 09:39:32.715723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.709 [2024-07-15 09:39:32.715968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.709 [2024-07-15 09:39:32.716189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.709 [2024-07-15 09:39:32.716199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.709 [2024-07-15 09:39:32.716206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.709 [2024-07-15 09:39:32.719709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.709 [2024-07-15 09:39:32.728802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.709 [2024-07-15 09:39:32.729377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.709 [2024-07-15 09:39:32.729396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.709 [2024-07-15 09:39:32.729408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.709 [2024-07-15 09:39:32.729626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.709 [2024-07-15 09:39:32.729851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.709 [2024-07-15 09:39:32.729861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.709 [2024-07-15 09:39:32.729868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.709 [2024-07-15 09:39:32.733365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.709 [2024-07-15 09:39:32.742659] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.709 [2024-07-15 09:39:32.743184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.709 [2024-07-15 09:39:32.743222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.709 [2024-07-15 09:39:32.743232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.709 [2024-07-15 09:39:32.743468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.709 [2024-07-15 09:39:32.743689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.709 [2024-07-15 09:39:32.743698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.709 [2024-07-15 09:39:32.743706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.709 [2024-07-15 09:39:32.747219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.709 [2024-07-15 09:39:32.756512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.709 [2024-07-15 09:39:32.757111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.709 [2024-07-15 09:39:32.757131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.709 [2024-07-15 09:39:32.757139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.709 [2024-07-15 09:39:32.757355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.709 [2024-07-15 09:39:32.757572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.709 [2024-07-15 09:39:32.757581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.709 [2024-07-15 09:39:32.757588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.709 [2024-07-15 09:39:32.761091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.709 [2024-07-15 09:39:32.770369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.709 [2024-07-15 09:39:32.770987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.709 [2024-07-15 09:39:32.771025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.709 [2024-07-15 09:39:32.771035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.709 [2024-07-15 09:39:32.771272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.709 [2024-07-15 09:39:32.771493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.709 [2024-07-15 09:39:32.771507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.709 [2024-07-15 09:39:32.771514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.709 [2024-07-15 09:39:32.775017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.709 [2024-07-15 09:39:32.784301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.709 [2024-07-15 09:39:32.784975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.709 [2024-07-15 09:39:32.785013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.709 [2024-07-15 09:39:32.785024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.709 [2024-07-15 09:39:32.785260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.709 [2024-07-15 09:39:32.785481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.709 [2024-07-15 09:39:32.785490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.709 [2024-07-15 09:39:32.785498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.710 [2024-07-15 09:39:32.789009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.710 [2024-07-15 09:39:32.798091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.710 [2024-07-15 09:39:32.798749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.710 [2024-07-15 09:39:32.798794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.710 [2024-07-15 09:39:32.798805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.710 [2024-07-15 09:39:32.799041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.710 [2024-07-15 09:39:32.799262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.710 [2024-07-15 09:39:32.799272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.710 [2024-07-15 09:39:32.799279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.710 [2024-07-15 09:39:32.802788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.710 [2024-07-15 09:39:32.811872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.710 [2024-07-15 09:39:32.812456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.710 [2024-07-15 09:39:32.812475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.710 [2024-07-15 09:39:32.812483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.710 [2024-07-15 09:39:32.812699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.710 [2024-07-15 09:39:32.812923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.710 [2024-07-15 09:39:32.812932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.710 [2024-07-15 09:39:32.812939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.710 [2024-07-15 09:39:32.816434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.710 [2024-07-15 09:39:32.825715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.710 [2024-07-15 09:39:32.826383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.710 [2024-07-15 09:39:32.826421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.710 [2024-07-15 09:39:32.826431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.710 [2024-07-15 09:39:32.826668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.710 [2024-07-15 09:39:32.826898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.710 [2024-07-15 09:39:32.826909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.710 [2024-07-15 09:39:32.826916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.710 [2024-07-15 09:39:32.830416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.710 [2024-07-15 09:39:32.839499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.710 [2024-07-15 09:39:32.840147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.710 [2024-07-15 09:39:32.840186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.710 [2024-07-15 09:39:32.840196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.710 [2024-07-15 09:39:32.840433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.710 [2024-07-15 09:39:32.840653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.710 [2024-07-15 09:39:32.840663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.710 [2024-07-15 09:39:32.840670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.710 [2024-07-15 09:39:32.844179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.710 [2024-07-15 09:39:32.853245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.710 [2024-07-15 09:39:32.853948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.710 [2024-07-15 09:39:32.853986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.710 [2024-07-15 09:39:32.853998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.710 [2024-07-15 09:39:32.854234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.710 [2024-07-15 09:39:32.854455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.710 [2024-07-15 09:39:32.854464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.710 [2024-07-15 09:39:32.854472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.710 [2024-07-15 09:39:32.857991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.710 [2024-07-15 09:39:32.867062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.710 [2024-07-15 09:39:32.867723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.710 [2024-07-15 09:39:32.867768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.710 [2024-07-15 09:39:32.867779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.710 [2024-07-15 09:39:32.868020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.710 [2024-07-15 09:39:32.868240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.710 [2024-07-15 09:39:32.868249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.710 [2024-07-15 09:39:32.868257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.710 [2024-07-15 09:39:32.871762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.710 [2024-07-15 09:39:32.880843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.710 [2024-07-15 09:39:32.881535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.710 [2024-07-15 09:39:32.881573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.710 [2024-07-15 09:39:32.881584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.710 [2024-07-15 09:39:32.881829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.710 [2024-07-15 09:39:32.882050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.710 [2024-07-15 09:39:32.882060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.710 [2024-07-15 09:39:32.882068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.710 [2024-07-15 09:39:32.885565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.710 [2024-07-15 09:39:32.894640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.710 [2024-07-15 09:39:32.895320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.710 [2024-07-15 09:39:32.895358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.710 [2024-07-15 09:39:32.895368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.710 [2024-07-15 09:39:32.895605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.710 [2024-07-15 09:39:32.895836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.710 [2024-07-15 09:39:32.895846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.710 [2024-07-15 09:39:32.895854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.710 [2024-07-15 09:39:32.899354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.972 [2024-07-15 09:39:32.908434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.972 [2024-07-15 09:39:32.909067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.972 [2024-07-15 09:39:32.909105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.972 [2024-07-15 09:39:32.909116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.972 [2024-07-15 09:39:32.909352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.972 [2024-07-15 09:39:32.909573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.972 [2024-07-15 09:39:32.909583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.972 [2024-07-15 09:39:32.909595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.972 [2024-07-15 09:39:32.913107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.972 [2024-07-15 09:39:32.922187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.972 [2024-07-15 09:39:32.922848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.972 [2024-07-15 09:39:32.922885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.972 [2024-07-15 09:39:32.922895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.972 [2024-07-15 09:39:32.923132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.973 [2024-07-15 09:39:32.923352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.973 [2024-07-15 09:39:32.923363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.973 [2024-07-15 09:39:32.923370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.973 [2024-07-15 09:39:32.926881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.973 [2024-07-15 09:39:32.935950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.973 [2024-07-15 09:39:32.936608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.973 [2024-07-15 09:39:32.936646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.973 [2024-07-15 09:39:32.936656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.973 [2024-07-15 09:39:32.936911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.973 [2024-07-15 09:39:32.937133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.973 [2024-07-15 09:39:32.937143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.973 [2024-07-15 09:39:32.937150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.973 [2024-07-15 09:39:32.940649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.973 [2024-07-15 09:39:32.949717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.973 [2024-07-15 09:39:32.950400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.973 [2024-07-15 09:39:32.950438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.973 [2024-07-15 09:39:32.950449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.973 [2024-07-15 09:39:32.950685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.973 [2024-07-15 09:39:32.950917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.973 [2024-07-15 09:39:32.950927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.973 [2024-07-15 09:39:32.950934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.973 [2024-07-15 09:39:32.954433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.973 [2024-07-15 09:39:32.963500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.973 [2024-07-15 09:39:32.964172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.973 [2024-07-15 09:39:32.964214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.973 [2024-07-15 09:39:32.964226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.973 [2024-07-15 09:39:32.964463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.973 [2024-07-15 09:39:32.964684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.973 [2024-07-15 09:39:32.964694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.973 [2024-07-15 09:39:32.964701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.973 [2024-07-15 09:39:32.968213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.973 [2024-07-15 09:39:32.977281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.973 [2024-07-15 09:39:32.977819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.973 [2024-07-15 09:39:32.977840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.973 [2024-07-15 09:39:32.977848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.973 [2024-07-15 09:39:32.978065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.973 [2024-07-15 09:39:32.978281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.973 [2024-07-15 09:39:32.978291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.973 [2024-07-15 09:39:32.978298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.973 [2024-07-15 09:39:32.981797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.973 [2024-07-15 09:39:32.991070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.973 [2024-07-15 09:39:32.991731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.973 [2024-07-15 09:39:32.991776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.973 [2024-07-15 09:39:32.991787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.973 [2024-07-15 09:39:32.992023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.973 [2024-07-15 09:39:32.992243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.973 [2024-07-15 09:39:32.992253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.973 [2024-07-15 09:39:32.992260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.973 [2024-07-15 09:39:32.995762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.973 [2024-07-15 09:39:33.004825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.973 [2024-07-15 09:39:33.005502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.973 [2024-07-15 09:39:33.005539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.973 [2024-07-15 09:39:33.005550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.973 [2024-07-15 09:39:33.005797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.973 [2024-07-15 09:39:33.006023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.973 [2024-07-15 09:39:33.006033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.973 [2024-07-15 09:39:33.006040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.973 [2024-07-15 09:39:33.009538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.973 [2024-07-15 09:39:33.018607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.973 [2024-07-15 09:39:33.019307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.973 [2024-07-15 09:39:33.019345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.973 [2024-07-15 09:39:33.019356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.973 [2024-07-15 09:39:33.019592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.973 [2024-07-15 09:39:33.019823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.973 [2024-07-15 09:39:33.019833] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.973 [2024-07-15 09:39:33.019840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.973 [2024-07-15 09:39:33.023338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.973 [2024-07-15 09:39:33.032404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.973 [2024-07-15 09:39:33.033085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.973 [2024-07-15 09:39:33.033123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.973 [2024-07-15 09:39:33.033134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.973 [2024-07-15 09:39:33.033370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.973 [2024-07-15 09:39:33.033591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.973 [2024-07-15 09:39:33.033600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.973 [2024-07-15 09:39:33.033608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.973 [2024-07-15 09:39:33.037123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.973 [2024-07-15 09:39:33.046193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.973 [2024-07-15 09:39:33.046856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.973 [2024-07-15 09:39:33.046894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.973 [2024-07-15 09:39:33.046905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.973 [2024-07-15 09:39:33.047145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.973 [2024-07-15 09:39:33.047365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.973 [2024-07-15 09:39:33.047376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.973 [2024-07-15 09:39:33.047384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.973 [2024-07-15 09:39:33.050898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.973 [2024-07-15 09:39:33.059968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.973 [2024-07-15 09:39:33.060604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.973 [2024-07-15 09:39:33.060641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.973 [2024-07-15 09:39:33.060652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.973 [2024-07-15 09:39:33.060898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.973 [2024-07-15 09:39:33.061120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.973 [2024-07-15 09:39:33.061129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.973 [2024-07-15 09:39:33.061136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.973 [2024-07-15 09:39:33.064632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.973 [2024-07-15 09:39:33.073706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.973 [2024-07-15 09:39:33.074386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.973 [2024-07-15 09:39:33.074424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.974 [2024-07-15 09:39:33.074435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.974 [2024-07-15 09:39:33.074671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.974 [2024-07-15 09:39:33.074902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.974 [2024-07-15 09:39:33.074912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.974 [2024-07-15 09:39:33.074919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.974 [2024-07-15 09:39:33.078418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.974 [2024-07-15 09:39:33.087487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.974 [2024-07-15 09:39:33.088016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.974 [2024-07-15 09:39:33.088054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.974 [2024-07-15 09:39:33.088065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.974 [2024-07-15 09:39:33.088301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.974 [2024-07-15 09:39:33.088521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.974 [2024-07-15 09:39:33.088531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.974 [2024-07-15 09:39:33.088538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.974 [2024-07-15 09:39:33.092045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.974 [2024-07-15 09:39:33.101325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.974 [2024-07-15 09:39:33.101901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.974 [2024-07-15 09:39:33.101941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.974 [2024-07-15 09:39:33.101957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.974 [2024-07-15 09:39:33.102195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.974 [2024-07-15 09:39:33.102417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.974 [2024-07-15 09:39:33.102427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.974 [2024-07-15 09:39:33.102435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.974 [2024-07-15 09:39:33.105944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.974 [2024-07-15 09:39:33.115218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.974 [2024-07-15 09:39:33.115901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.974 [2024-07-15 09:39:33.115939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.974 [2024-07-15 09:39:33.115949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.974 [2024-07-15 09:39:33.116185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.974 [2024-07-15 09:39:33.116406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.974 [2024-07-15 09:39:33.116415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.974 [2024-07-15 09:39:33.116423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.974 [2024-07-15 09:39:33.119929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.974 [2024-07-15 09:39:33.129012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.974 [2024-07-15 09:39:33.129692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.974 [2024-07-15 09:39:33.129730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.974 [2024-07-15 09:39:33.129742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.974 [2024-07-15 09:39:33.129988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.974 [2024-07-15 09:39:33.130210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.974 [2024-07-15 09:39:33.130219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.974 [2024-07-15 09:39:33.130226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.974 [2024-07-15 09:39:33.133813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.974 [2024-07-15 09:39:33.142907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.974 [2024-07-15 09:39:33.143605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.974 [2024-07-15 09:39:33.143642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.974 [2024-07-15 09:39:33.143653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.974 [2024-07-15 09:39:33.143900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.974 [2024-07-15 09:39:33.144121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.974 [2024-07-15 09:39:33.144135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.974 [2024-07-15 09:39:33.144142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.974 [2024-07-15 09:39:33.147640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.974 [2024-07-15 09:39:33.156709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.974 [2024-07-15 09:39:33.157385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.974 [2024-07-15 09:39:33.157424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:45.974 [2024-07-15 09:39:33.157434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:45.974 [2024-07-15 09:39:33.157670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:45.974 [2024-07-15 09:39:33.157901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.974 [2024-07-15 09:39:33.157912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.974 [2024-07-15 09:39:33.157919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.974 [2024-07-15 09:39:33.161418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.974 [2024-07-15 09:39:33.170494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.235 [2024-07-15 09:39:33.171181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.235 [2024-07-15 09:39:33.171219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.235 [2024-07-15 09:39:33.171230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.235 [2024-07-15 09:39:33.171466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.235 [2024-07-15 09:39:33.171686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.235 [2024-07-15 09:39:33.171695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.235 [2024-07-15 09:39:33.171703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.235 [2024-07-15 09:39:33.175210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.235 [2024-07-15 09:39:33.184281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.235 [2024-07-15 09:39:33.184942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.235 [2024-07-15 09:39:33.184980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.235 [2024-07-15 09:39:33.184991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.235 [2024-07-15 09:39:33.185228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.235 [2024-07-15 09:39:33.185448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.235 [2024-07-15 09:39:33.185457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.235 [2024-07-15 09:39:33.185465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.235 [2024-07-15 09:39:33.188973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.235 [2024-07-15 09:39:33.198044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.235 [2024-07-15 09:39:33.198711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.235 [2024-07-15 09:39:33.198748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.235 [2024-07-15 09:39:33.198769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.235 [2024-07-15 09:39:33.199006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.235 [2024-07-15 09:39:33.199226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.235 [2024-07-15 09:39:33.199235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.235 [2024-07-15 09:39:33.199243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.235 [2024-07-15 09:39:33.202738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.235 [2024-07-15 09:39:33.211812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.235 [2024-07-15 09:39:33.212380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.235 [2024-07-15 09:39:33.212418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.235 [2024-07-15 09:39:33.212429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.235 [2024-07-15 09:39:33.212667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.235 [2024-07-15 09:39:33.212898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.235 [2024-07-15 09:39:33.212909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.235 [2024-07-15 09:39:33.212917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.235 [2024-07-15 09:39:33.216417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.235 [2024-07-15 09:39:33.225690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.235 [2024-07-15 09:39:33.226350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.236 [2024-07-15 09:39:33.226388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.236 [2024-07-15 09:39:33.226398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.236 [2024-07-15 09:39:33.226635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.236 [2024-07-15 09:39:33.226866] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.236 [2024-07-15 09:39:33.226876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.236 [2024-07-15 09:39:33.226884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.236 [2024-07-15 09:39:33.230382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.236 [2024-07-15 09:39:33.239458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.236 [2024-07-15 09:39:33.240140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.236 [2024-07-15 09:39:33.240177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.236 [2024-07-15 09:39:33.240188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.236 [2024-07-15 09:39:33.240429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.236 [2024-07-15 09:39:33.240649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.236 [2024-07-15 09:39:33.240659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.236 [2024-07-15 09:39:33.240666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.236 [2024-07-15 09:39:33.244174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.236 [2024-07-15 09:39:33.253242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.236 [2024-07-15 09:39:33.253850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.236 [2024-07-15 09:39:33.253888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.236 [2024-07-15 09:39:33.253901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.236 [2024-07-15 09:39:33.254140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.236 [2024-07-15 09:39:33.254360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.236 [2024-07-15 09:39:33.254370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.236 [2024-07-15 09:39:33.254378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.236 [2024-07-15 09:39:33.257885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.236 [2024-07-15 09:39:33.267152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.236 [2024-07-15 09:39:33.267789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.236 [2024-07-15 09:39:33.267827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.236 [2024-07-15 09:39:33.267837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.236 [2024-07-15 09:39:33.268073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.236 [2024-07-15 09:39:33.268294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.236 [2024-07-15 09:39:33.268303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.236 [2024-07-15 09:39:33.268311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.236 [2024-07-15 09:39:33.271818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.236 [2024-07-15 09:39:33.280894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.236 [2024-07-15 09:39:33.281544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.236 [2024-07-15 09:39:33.281582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.236 [2024-07-15 09:39:33.281592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.236 [2024-07-15 09:39:33.281839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.236 [2024-07-15 09:39:33.282060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.236 [2024-07-15 09:39:33.282070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.236 [2024-07-15 09:39:33.282081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.236 [2024-07-15 09:39:33.285579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.236 [2024-07-15 09:39:33.294637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.236 [2024-07-15 09:39:33.295279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.236 [2024-07-15 09:39:33.295317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.236 [2024-07-15 09:39:33.295328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.236 [2024-07-15 09:39:33.295564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.236 [2024-07-15 09:39:33.295794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.236 [2024-07-15 09:39:33.295804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.236 [2024-07-15 09:39:33.295812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.236 [2024-07-15 09:39:33.299309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.236 [2024-07-15 09:39:33.308373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.236 [2024-07-15 09:39:33.308947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.236 [2024-07-15 09:39:33.308967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.236 [2024-07-15 09:39:33.308975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.236 [2024-07-15 09:39:33.309192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.236 [2024-07-15 09:39:33.309409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.236 [2024-07-15 09:39:33.309417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.236 [2024-07-15 09:39:33.309424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.236 [2024-07-15 09:39:33.312923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.236 [2024-07-15 09:39:33.322191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.236 [2024-07-15 09:39:33.322750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.236 [2024-07-15 09:39:33.322771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.236 [2024-07-15 09:39:33.322779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.236 [2024-07-15 09:39:33.322995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.236 [2024-07-15 09:39:33.323211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.236 [2024-07-15 09:39:33.323221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.236 [2024-07-15 09:39:33.323227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.236 [2024-07-15 09:39:33.326716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.236 [2024-07-15 09:39:33.335982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.236 [2024-07-15 09:39:33.336545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.236 [2024-07-15 09:39:33.336560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.236 [2024-07-15 09:39:33.336567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.236 [2024-07-15 09:39:33.336798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.236 [2024-07-15 09:39:33.337015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.236 [2024-07-15 09:39:33.337024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.236 [2024-07-15 09:39:33.337031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.236 [2024-07-15 09:39:33.340520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.236 [2024-07-15 09:39:33.349791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.236 [2024-07-15 09:39:33.350416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.236 [2024-07-15 09:39:33.350453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.236 [2024-07-15 09:39:33.350464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.236 [2024-07-15 09:39:33.350700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.236 [2024-07-15 09:39:33.350931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.236 [2024-07-15 09:39:33.350942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.236 [2024-07-15 09:39:33.350949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.236 [2024-07-15 09:39:33.354447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.236 [2024-07-15 09:39:33.363718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.236 [2024-07-15 09:39:33.364397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.236 [2024-07-15 09:39:33.364434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.236 [2024-07-15 09:39:33.364445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.236 [2024-07-15 09:39:33.364681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.236 [2024-07-15 09:39:33.364912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.236 [2024-07-15 09:39:33.364922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.236 [2024-07-15 09:39:33.364929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.236 [2024-07-15 09:39:33.368432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.236 [2024-07-15 09:39:33.377520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.236 [2024-07-15 09:39:33.378190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.236 [2024-07-15 09:39:33.378227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.236 [2024-07-15 09:39:33.378238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.236 [2024-07-15 09:39:33.378475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.236 [2024-07-15 09:39:33.378700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.236 [2024-07-15 09:39:33.378710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.236 [2024-07-15 09:39:33.378718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.236 [2024-07-15 09:39:33.382235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.236 [2024-07-15 09:39:33.391322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.236 [2024-07-15 09:39:33.391969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.236 [2024-07-15 09:39:33.392007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.236 [2024-07-15 09:39:33.392018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.236 [2024-07-15 09:39:33.392254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.236 [2024-07-15 09:39:33.392475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.236 [2024-07-15 09:39:33.392484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.236 [2024-07-15 09:39:33.392492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.236 [2024-07-15 09:39:33.396003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.236 [2024-07-15 09:39:33.405089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.236 [2024-07-15 09:39:33.405767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.236 [2024-07-15 09:39:33.405805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.236 [2024-07-15 09:39:33.405816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.236 [2024-07-15 09:39:33.406052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.236 [2024-07-15 09:39:33.406273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.236 [2024-07-15 09:39:33.406282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.236 [2024-07-15 09:39:33.406289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.236 [2024-07-15 09:39:33.409790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.236 [2024-07-15 09:39:33.418857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.236 [2024-07-15 09:39:33.419315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.237 [2024-07-15 09:39:33.419334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.237 [2024-07-15 09:39:33.419342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.237 [2024-07-15 09:39:33.419559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.237 [2024-07-15 09:39:33.419781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.237 [2024-07-15 09:39:33.419791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.237 [2024-07-15 09:39:33.419798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.237 [2024-07-15 09:39:33.423300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.237 [2024-07-15 09:39:33.432795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.237 [2024-07-15 09:39:33.433443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.237 [2024-07-15 09:39:33.433481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.237 [2024-07-15 09:39:33.433492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.237 [2024-07-15 09:39:33.433728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.237 [2024-07-15 09:39:33.433957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.237 [2024-07-15 09:39:33.433968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.237 [2024-07-15 09:39:33.433975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.496 [2024-07-15 09:39:33.437490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.496 [2024-07-15 09:39:33.446576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.496 [2024-07-15 09:39:33.447120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.496 [2024-07-15 09:39:33.447139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.496 [2024-07-15 09:39:33.447147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.496 [2024-07-15 09:39:33.447364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.496 [2024-07-15 09:39:33.447581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.496 [2024-07-15 09:39:33.447590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.496 [2024-07-15 09:39:33.447597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.496 [2024-07-15 09:39:33.451098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.496 [2024-07-15 09:39:33.460383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.496 [2024-07-15 09:39:33.461025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.496 [2024-07-15 09:39:33.461063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.496 [2024-07-15 09:39:33.461074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.496 [2024-07-15 09:39:33.461311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.496 [2024-07-15 09:39:33.461531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.496 [2024-07-15 09:39:33.461541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.496 [2024-07-15 09:39:33.461549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.496 [2024-07-15 09:39:33.465191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.496 [2024-07-15 09:39:33.474278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.496 [2024-07-15 09:39:33.474887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.496 [2024-07-15 09:39:33.474924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.496 [2024-07-15 09:39:33.474939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.496 [2024-07-15 09:39:33.475176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.496 [2024-07-15 09:39:33.475396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.496 [2024-07-15 09:39:33.475405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.496 [2024-07-15 09:39:33.475412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.496 [2024-07-15 09:39:33.478916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.496 [2024-07-15 09:39:33.488198] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.496 [2024-07-15 09:39:33.488781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.496 [2024-07-15 09:39:33.488819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.496 [2024-07-15 09:39:33.488830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.496 [2024-07-15 09:39:33.489067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.496 [2024-07-15 09:39:33.489287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.496 [2024-07-15 09:39:33.489297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.496 [2024-07-15 09:39:33.489304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.496 [2024-07-15 09:39:33.492812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.496 [2024-07-15 09:39:33.502086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.496 [2024-07-15 09:39:33.502761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.496 [2024-07-15 09:39:33.502798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.496 [2024-07-15 09:39:33.502810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.496 [2024-07-15 09:39:33.503049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.496 [2024-07-15 09:39:33.503270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.496 [2024-07-15 09:39:33.503279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.496 [2024-07-15 09:39:33.503286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.496 [2024-07-15 09:39:33.506786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.496 [2024-07-15 09:39:33.515853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.496 [2024-07-15 09:39:33.516534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.496 [2024-07-15 09:39:33.516572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.496 [2024-07-15 09:39:33.516582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.496 [2024-07-15 09:39:33.516827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.496 [2024-07-15 09:39:33.517049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.496 [2024-07-15 09:39:33.517064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.496 [2024-07-15 09:39:33.517071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.496 [2024-07-15 09:39:33.520568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.496 [2024-07-15 09:39:33.529639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.496 [2024-07-15 09:39:33.530272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.496 [2024-07-15 09:39:33.530310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.496 [2024-07-15 09:39:33.530321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.496 [2024-07-15 09:39:33.530557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.496 [2024-07-15 09:39:33.530786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.496 [2024-07-15 09:39:33.530797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.496 [2024-07-15 09:39:33.530805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.496 [2024-07-15 09:39:33.534302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.496 [2024-07-15 09:39:33.543380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.496 [2024-07-15 09:39:33.544063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.496 [2024-07-15 09:39:33.544100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.496 [2024-07-15 09:39:33.544111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.496 [2024-07-15 09:39:33.544347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.497 [2024-07-15 09:39:33.544568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.497 [2024-07-15 09:39:33.544577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.497 [2024-07-15 09:39:33.544585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.497 [2024-07-15 09:39:33.548091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.497 [2024-07-15 09:39:33.557162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.497 [2024-07-15 09:39:33.557741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.497 [2024-07-15 09:39:33.557765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.497 [2024-07-15 09:39:33.557773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.497 [2024-07-15 09:39:33.557990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.497 [2024-07-15 09:39:33.558207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.497 [2024-07-15 09:39:33.558216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.497 [2024-07-15 09:39:33.558222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.497 [2024-07-15 09:39:33.561715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.497 [2024-07-15 09:39:33.570991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.497 [2024-07-15 09:39:33.571517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.497 [2024-07-15 09:39:33.571533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.497 [2024-07-15 09:39:33.571541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.497 [2024-07-15 09:39:33.571763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.497 [2024-07-15 09:39:33.571980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.497 [2024-07-15 09:39:33.571989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.497 [2024-07-15 09:39:33.571996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.497 [2024-07-15 09:39:33.575485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.497 [2024-07-15 09:39:33.584755] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.497 [2024-07-15 09:39:33.585426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.497 [2024-07-15 09:39:33.585464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.497 [2024-07-15 09:39:33.585476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.497 [2024-07-15 09:39:33.585714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.497 [2024-07-15 09:39:33.585943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.497 [2024-07-15 09:39:33.585954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.497 [2024-07-15 09:39:33.585961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.497 [2024-07-15 09:39:33.589461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.497 [2024-07-15 09:39:33.598570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.497 [2024-07-15 09:39:33.599236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.497 [2024-07-15 09:39:33.599275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.497 [2024-07-15 09:39:33.599285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.497 [2024-07-15 09:39:33.599521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.497 [2024-07-15 09:39:33.599742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.497 [2024-07-15 09:39:33.599761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.497 [2024-07-15 09:39:33.599769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.497 [2024-07-15 09:39:33.603267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.497 [2024-07-15 09:39:33.612319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.497 [2024-07-15 09:39:33.612993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.497 [2024-07-15 09:39:33.613031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.497 [2024-07-15 09:39:33.613042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.497 [2024-07-15 09:39:33.613283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.497 [2024-07-15 09:39:33.613503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.497 [2024-07-15 09:39:33.613513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.497 [2024-07-15 09:39:33.613520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.497 [2024-07-15 09:39:33.617028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.497 [2024-07-15 09:39:33.626101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.497 [2024-07-15 09:39:33.626806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.497 [2024-07-15 09:39:33.626844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.497 [2024-07-15 09:39:33.626855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.497 [2024-07-15 09:39:33.627091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.497 [2024-07-15 09:39:33.627312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.497 [2024-07-15 09:39:33.627321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.497 [2024-07-15 09:39:33.627329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.497 [2024-07-15 09:39:33.630835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.497 [2024-07-15 09:39:33.639919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.497 [2024-07-15 09:39:33.640572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.497 [2024-07-15 09:39:33.640610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.497 [2024-07-15 09:39:33.640621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.497 [2024-07-15 09:39:33.640865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.497 [2024-07-15 09:39:33.641087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.497 [2024-07-15 09:39:33.641097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.497 [2024-07-15 09:39:33.641105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.497 [2024-07-15 09:39:33.644603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.497 [2024-07-15 09:39:33.653666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.497 [2024-07-15 09:39:33.654253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.497 [2024-07-15 09:39:33.654272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.497 [2024-07-15 09:39:33.654280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.497 [2024-07-15 09:39:33.654497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.497 [2024-07-15 09:39:33.654715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.497 [2024-07-15 09:39:33.654724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.497 [2024-07-15 09:39:33.654736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.497 [2024-07-15 09:39:33.658283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.497 [2024-07-15 09:39:33.667554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.497 [2024-07-15 09:39:33.668225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.497 [2024-07-15 09:39:33.668263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.497 [2024-07-15 09:39:33.668274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.497 [2024-07-15 09:39:33.668510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.497 [2024-07-15 09:39:33.668730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.497 [2024-07-15 09:39:33.668740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.497 [2024-07-15 09:39:33.668747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.497 [2024-07-15 09:39:33.672256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.497 [2024-07-15 09:39:33.681324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.498 [2024-07-15 09:39:33.681870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.498 [2024-07-15 09:39:33.681908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.498 [2024-07-15 09:39:33.681921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.498 [2024-07-15 09:39:33.682160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.498 [2024-07-15 09:39:33.682380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.498 [2024-07-15 09:39:33.682389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.498 [2024-07-15 09:39:33.682397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.498 [2024-07-15 09:39:33.685903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.758 [2024-07-15 09:39:33.695176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.758 [2024-07-15 09:39:33.695834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.758 [2024-07-15 09:39:33.695872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.758 [2024-07-15 09:39:33.695884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.758 [2024-07-15 09:39:33.696125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.758 [2024-07-15 09:39:33.696345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.758 [2024-07-15 09:39:33.696355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.758 [2024-07-15 09:39:33.696362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.758 [2024-07-15 09:39:33.699873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.758 [2024-07-15 09:39:33.708945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.758 [2024-07-15 09:39:33.709602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.758 [2024-07-15 09:39:33.709640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.758 [2024-07-15 09:39:33.709650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.758 [2024-07-15 09:39:33.709894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.758 [2024-07-15 09:39:33.710115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.758 [2024-07-15 09:39:33.710125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.758 [2024-07-15 09:39:33.710132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.758 [2024-07-15 09:39:33.713629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.758 [2024-07-15 09:39:33.722727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.758 [2024-07-15 09:39:33.723369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.758 [2024-07-15 09:39:33.723407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.758 [2024-07-15 09:39:33.723417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.758 [2024-07-15 09:39:33.723653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.758 [2024-07-15 09:39:33.723880] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.758 [2024-07-15 09:39:33.723890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.758 [2024-07-15 09:39:33.723898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.758 [2024-07-15 09:39:33.727395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.758 [2024-07-15 09:39:33.736465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.758 [2024-07-15 09:39:33.737049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.758 [2024-07-15 09:39:33.737069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.758 [2024-07-15 09:39:33.737078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.758 [2024-07-15 09:39:33.737294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.758 [2024-07-15 09:39:33.737511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.758 [2024-07-15 09:39:33.737520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.758 [2024-07-15 09:39:33.737527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.758 [2024-07-15 09:39:33.741035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.758 [2024-07-15 09:39:33.750309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.758 [2024-07-15 09:39:33.750878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.758 [2024-07-15 09:39:33.750916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.758 [2024-07-15 09:39:33.750928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.758 [2024-07-15 09:39:33.751167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.758 [2024-07-15 09:39:33.751392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.758 [2024-07-15 09:39:33.751402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.758 [2024-07-15 09:39:33.751409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.758 [2024-07-15 09:39:33.754916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.758 [2024-07-15 09:39:33.764191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.758 [2024-07-15 09:39:33.764851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.758 [2024-07-15 09:39:33.764890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.758 [2024-07-15 09:39:33.764902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.758 [2024-07-15 09:39:33.765142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.758 [2024-07-15 09:39:33.765363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.758 [2024-07-15 09:39:33.765373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.758 [2024-07-15 09:39:33.765382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.758 [2024-07-15 09:39:33.768890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.758 [2024-07-15 09:39:33.777958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.758 [2024-07-15 09:39:33.778501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.758 [2024-07-15 09:39:33.778520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.758 [2024-07-15 09:39:33.778528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.758 [2024-07-15 09:39:33.778745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.758 [2024-07-15 09:39:33.778969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.758 [2024-07-15 09:39:33.778979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.758 [2024-07-15 09:39:33.778986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.758 [2024-07-15 09:39:33.782479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.758 [2024-07-15 09:39:33.791749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.759 [2024-07-15 09:39:33.792416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.759 [2024-07-15 09:39:33.792454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.759 [2024-07-15 09:39:33.792464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.759 [2024-07-15 09:39:33.792700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.759 [2024-07-15 09:39:33.792928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.759 [2024-07-15 09:39:33.792939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.759 [2024-07-15 09:39:33.792947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.759 [2024-07-15 09:39:33.796450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.759 [2024-07-15 09:39:33.805526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.759 [2024-07-15 09:39:33.806174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.759 [2024-07-15 09:39:33.806213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.759 [2024-07-15 09:39:33.806224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.759 [2024-07-15 09:39:33.806460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.759 [2024-07-15 09:39:33.806680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.759 [2024-07-15 09:39:33.806691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.759 [2024-07-15 09:39:33.806698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.759 [2024-07-15 09:39:33.810205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.759 [2024-07-15 09:39:33.819281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.759 [2024-07-15 09:39:33.819891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.759 [2024-07-15 09:39:33.819929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.759 [2024-07-15 09:39:33.819941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.759 [2024-07-15 09:39:33.820180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.759 [2024-07-15 09:39:33.820401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.759 [2024-07-15 09:39:33.820410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.759 [2024-07-15 09:39:33.820418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.759 [2024-07-15 09:39:33.823927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.759 [2024-07-15 09:39:33.833205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.759 [2024-07-15 09:39:33.833791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.759 [2024-07-15 09:39:33.833811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.759 [2024-07-15 09:39:33.833819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.759 [2024-07-15 09:39:33.834036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.759 [2024-07-15 09:39:33.834253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.759 [2024-07-15 09:39:33.834263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.759 [2024-07-15 09:39:33.834270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.759 [2024-07-15 09:39:33.837778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.759 [2024-07-15 09:39:33.847055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.759 [2024-07-15 09:39:33.847725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.759 [2024-07-15 09:39:33.847769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.759 [2024-07-15 09:39:33.847786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.759 [2024-07-15 09:39:33.848024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.759 [2024-07-15 09:39:33.848245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.759 [2024-07-15 09:39:33.848255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.759 [2024-07-15 09:39:33.848262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.759 [2024-07-15 09:39:33.851766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.759 [2024-07-15 09:39:33.860837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.759 [2024-07-15 09:39:33.861379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.759 [2024-07-15 09:39:33.861399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.759 [2024-07-15 09:39:33.861406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.759 [2024-07-15 09:39:33.861623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.759 [2024-07-15 09:39:33.861846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.759 [2024-07-15 09:39:33.861855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.759 [2024-07-15 09:39:33.861862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.759 [2024-07-15 09:39:33.865373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.759 [2024-07-15 09:39:33.874655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.759 [2024-07-15 09:39:33.875320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.759 [2024-07-15 09:39:33.875358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.759 [2024-07-15 09:39:33.875370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.759 [2024-07-15 09:39:33.875608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.759 [2024-07-15 09:39:33.875835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.759 [2024-07-15 09:39:33.875846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.759 [2024-07-15 09:39:33.875853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.759 [2024-07-15 09:39:33.879351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.759 [2024-07-15 09:39:33.888423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.759 [2024-07-15 09:39:33.888983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.759 [2024-07-15 09:39:33.889021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.759 [2024-07-15 09:39:33.889033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.759 [2024-07-15 09:39:33.889270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.759 [2024-07-15 09:39:33.889491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.759 [2024-07-15 09:39:33.889505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.759 [2024-07-15 09:39:33.889512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.759 [2024-07-15 09:39:33.893019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.759 [2024-07-15 09:39:33.902298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.759 [2024-07-15 09:39:33.902900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.759 [2024-07-15 09:39:33.902937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.759 [2024-07-15 09:39:33.902949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.759 [2024-07-15 09:39:33.903187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.759 [2024-07-15 09:39:33.903407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.759 [2024-07-15 09:39:33.903416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.759 [2024-07-15 09:39:33.903424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.759 [2024-07-15 09:39:33.906928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.759 [2024-07-15 09:39:33.916209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.759 [2024-07-15 09:39:33.916793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.759 [2024-07-15 09:39:33.916814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.759 [2024-07-15 09:39:33.916821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.759 [2024-07-15 09:39:33.917038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.759 [2024-07-15 09:39:33.917255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.759 [2024-07-15 09:39:33.917266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.759 [2024-07-15 09:39:33.917273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.759 [2024-07-15 09:39:33.920776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.759 [2024-07-15 09:39:33.930052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.759 [2024-07-15 09:39:33.930717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.759 [2024-07-15 09:39:33.930761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.759 [2024-07-15 09:39:33.930773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.759 [2024-07-15 09:39:33.931010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.759 [2024-07-15 09:39:33.931230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.759 [2024-07-15 09:39:33.931241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.759 [2024-07-15 09:39:33.931248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.759 [2024-07-15 09:39:33.934748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.759 [2024-07-15 09:39:33.943841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.760 [2024-07-15 09:39:33.944426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.760 [2024-07-15 09:39:33.944446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:46.760 [2024-07-15 09:39:33.944453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:46.760 [2024-07-15 09:39:33.944670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:46.760 [2024-07-15 09:39:33.944894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.760 [2024-07-15 09:39:33.944904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.760 [2024-07-15 09:39:33.944911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.760 [2024-07-15 09:39:33.948406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.020 [2024-07-15 09:39:33.957678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.020 [2024-07-15 09:39:33.958257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.020 [2024-07-15 09:39:33.958273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.020 [2024-07-15 09:39:33.958281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.020 [2024-07-15 09:39:33.958497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.020 [2024-07-15 09:39:33.958713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.020 [2024-07-15 09:39:33.958722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.020 [2024-07-15 09:39:33.958729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.020 [2024-07-15 09:39:33.962224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.020 [2024-07-15 09:39:33.971493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.020 [2024-07-15 09:39:33.971999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.020 [2024-07-15 09:39:33.972016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.020 [2024-07-15 09:39:33.972024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.020 [2024-07-15 09:39:33.972240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.020 [2024-07-15 09:39:33.972457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.020 [2024-07-15 09:39:33.972466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.020 [2024-07-15 09:39:33.972473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.020 [2024-07-15 09:39:33.975968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.020 [2024-07-15 09:39:33.985237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.020 [2024-07-15 09:39:33.985654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.020 [2024-07-15 09:39:33.985674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.020 [2024-07-15 09:39:33.985682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.020 [2024-07-15 09:39:33.985913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.020 [2024-07-15 09:39:33.986132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.020 [2024-07-15 09:39:33.986140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.020 [2024-07-15 09:39:33.986147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.020 [2024-07-15 09:39:33.989641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.020 [2024-07-15 09:39:33.999125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.020 [2024-07-15 09:39:33.999820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.020 [2024-07-15 09:39:33.999858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.020 [2024-07-15 09:39:33.999870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.020 [2024-07-15 09:39:34.000110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.020 [2024-07-15 09:39:34.000331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.020 [2024-07-15 09:39:34.000340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.020 [2024-07-15 09:39:34.000348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.020 [2024-07-15 09:39:34.003853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.020 [2024-07-15 09:39:34.012924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.020 [2024-07-15 09:39:34.013556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.020 [2024-07-15 09:39:34.013594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.020 [2024-07-15 09:39:34.013605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.020 [2024-07-15 09:39:34.013849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.020 [2024-07-15 09:39:34.014071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.020 [2024-07-15 09:39:34.014081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.020 [2024-07-15 09:39:34.014088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.020 [2024-07-15 09:39:34.017587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.020 [2024-07-15 09:39:34.026658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.020 [2024-07-15 09:39:34.027336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.020 [2024-07-15 09:39:34.027374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.020 [2024-07-15 09:39:34.027385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.020 [2024-07-15 09:39:34.027621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.020 [2024-07-15 09:39:34.027849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.020 [2024-07-15 09:39:34.027859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.020 [2024-07-15 09:39:34.027875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.020 [2024-07-15 09:39:34.031374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.020 [2024-07-15 09:39:34.040459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.020 [2024-07-15 09:39:34.041169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.020 [2024-07-15 09:39:34.041207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.020 [2024-07-15 09:39:34.041218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.020 [2024-07-15 09:39:34.041455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.020 [2024-07-15 09:39:34.041675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.020 [2024-07-15 09:39:34.041685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.020 [2024-07-15 09:39:34.041692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.020 [2024-07-15 09:39:34.045198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.020 [2024-07-15 09:39:34.054268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.020 [2024-07-15 09:39:34.054898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.020 [2024-07-15 09:39:34.054936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.020 [2024-07-15 09:39:34.054948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.020 [2024-07-15 09:39:34.055188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.020 [2024-07-15 09:39:34.055408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.020 [2024-07-15 09:39:34.055417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.020 [2024-07-15 09:39:34.055425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.020 [2024-07-15 09:39:34.058931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.020 [2024-07-15 09:39:34.068208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.020 [2024-07-15 09:39:34.068805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.020 [2024-07-15 09:39:34.068831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.020 [2024-07-15 09:39:34.068839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.020 [2024-07-15 09:39:34.069061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.020 [2024-07-15 09:39:34.069279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.020 [2024-07-15 09:39:34.069287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.020 [2024-07-15 09:39:34.069295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.020 [2024-07-15 09:39:34.072794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.020 [2024-07-15 09:39:34.082067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.020 [2024-07-15 09:39:34.082619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.021 [2024-07-15 09:39:34.082637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.021 [2024-07-15 09:39:34.082644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.021 [2024-07-15 09:39:34.082865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.021 [2024-07-15 09:39:34.083083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.021 [2024-07-15 09:39:34.083091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.021 [2024-07-15 09:39:34.083098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.021 [2024-07-15 09:39:34.086592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.021 [2024-07-15 09:39:34.095869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.021 [2024-07-15 09:39:34.096533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.021 [2024-07-15 09:39:34.096571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.021 [2024-07-15 09:39:34.096584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.021 [2024-07-15 09:39:34.096830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.021 [2024-07-15 09:39:34.097052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.021 [2024-07-15 09:39:34.097061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.021 [2024-07-15 09:39:34.097068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.021 [2024-07-15 09:39:34.100567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.021 [2024-07-15 09:39:34.109645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.021 [2024-07-15 09:39:34.110291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.021 [2024-07-15 09:39:34.110330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.021 [2024-07-15 09:39:34.110341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.021 [2024-07-15 09:39:34.110578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.021 [2024-07-15 09:39:34.110808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.021 [2024-07-15 09:39:34.110819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.021 [2024-07-15 09:39:34.110827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.021 [2024-07-15 09:39:34.114326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.021 [2024-07-15 09:39:34.123397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.021 [2024-07-15 09:39:34.124098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.021 [2024-07-15 09:39:34.124136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.021 [2024-07-15 09:39:34.124147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.021 [2024-07-15 09:39:34.124383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.021 [2024-07-15 09:39:34.124609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.021 [2024-07-15 09:39:34.124619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.021 [2024-07-15 09:39:34.124626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.021 [2024-07-15 09:39:34.128132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.021 [2024-07-15 09:39:34.137208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.021 [2024-07-15 09:39:34.137792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.021 [2024-07-15 09:39:34.137812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.021 [2024-07-15 09:39:34.137820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.021 [2024-07-15 09:39:34.138037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.021 [2024-07-15 09:39:34.138253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.021 [2024-07-15 09:39:34.138262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.021 [2024-07-15 09:39:34.138269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.021 [2024-07-15 09:39:34.141779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.021 [2024-07-15 09:39:34.151056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.021 [2024-07-15 09:39:34.151722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.021 [2024-07-15 09:39:34.151767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.021 [2024-07-15 09:39:34.151779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.021 [2024-07-15 09:39:34.152017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.021 [2024-07-15 09:39:34.152237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.021 [2024-07-15 09:39:34.152247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.021 [2024-07-15 09:39:34.152255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.021 [2024-07-15 09:39:34.155754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.021 [2024-07-15 09:39:34.164906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.021 [2024-07-15 09:39:34.165583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.021 [2024-07-15 09:39:34.165621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.021 [2024-07-15 09:39:34.165632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.021 [2024-07-15 09:39:34.165876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.021 [2024-07-15 09:39:34.166097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.021 [2024-07-15 09:39:34.166107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.021 [2024-07-15 09:39:34.166115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.021 [2024-07-15 09:39:34.169622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.021 [2024-07-15 09:39:34.178695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.021 [2024-07-15 09:39:34.179338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.021 [2024-07-15 09:39:34.179376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.021 [2024-07-15 09:39:34.179387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.021 [2024-07-15 09:39:34.179623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.021 [2024-07-15 09:39:34.179849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.021 [2024-07-15 09:39:34.179860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.021 [2024-07-15 09:39:34.179867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.021 [2024-07-15 09:39:34.183370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.021 [2024-07-15 09:39:34.192444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.021 [2024-07-15 09:39:34.193075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.021 [2024-07-15 09:39:34.193114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.021 [2024-07-15 09:39:34.193124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.021 [2024-07-15 09:39:34.193360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.021 [2024-07-15 09:39:34.193581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.021 [2024-07-15 09:39:34.193590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.021 [2024-07-15 09:39:34.193598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.021 [2024-07-15 09:39:34.197106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.021 [2024-07-15 09:39:34.206182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.021 [2024-07-15 09:39:34.206704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.021 [2024-07-15 09:39:34.206723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.021 [2024-07-15 09:39:34.206731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.021 [2024-07-15 09:39:34.206952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.021 [2024-07-15 09:39:34.207170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.021 [2024-07-15 09:39:34.207179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.021 [2024-07-15 09:39:34.207186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.021 [2024-07-15 09:39:34.210676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.281 [2024-07-15 09:39:34.219957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.281 [2024-07-15 09:39:34.220528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.281 [2024-07-15 09:39:34.220544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.281 [2024-07-15 09:39:34.220556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.281 [2024-07-15 09:39:34.220778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.281 [2024-07-15 09:39:34.220994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.281 [2024-07-15 09:39:34.221004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.281 [2024-07-15 09:39:34.221011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.281 [2024-07-15 09:39:34.224506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.281 [2024-07-15 09:39:34.233782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.281 [2024-07-15 09:39:34.234351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.281 [2024-07-15 09:39:34.234367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.281 [2024-07-15 09:39:34.234375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.281 [2024-07-15 09:39:34.234590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.281 [2024-07-15 09:39:34.234812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.281 [2024-07-15 09:39:34.234821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.281 [2024-07-15 09:39:34.234828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.281 [2024-07-15 09:39:34.238321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.281 [2024-07-15 09:39:34.247607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.281 [2024-07-15 09:39:34.248302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.281 [2024-07-15 09:39:34.248340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.281 [2024-07-15 09:39:34.248352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.281 [2024-07-15 09:39:34.248589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.281 [2024-07-15 09:39:34.248817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.281 [2024-07-15 09:39:34.248828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.281 [2024-07-15 09:39:34.248835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.281 [2024-07-15 09:39:34.252332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.281 [2024-07-15 09:39:34.261401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.281 [2024-07-15 09:39:34.262053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.281 [2024-07-15 09:39:34.262092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.281 [2024-07-15 09:39:34.262103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.281 [2024-07-15 09:39:34.262339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.281 [2024-07-15 09:39:34.262560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.281 [2024-07-15 09:39:34.262574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.281 [2024-07-15 09:39:34.262582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.281 [2024-07-15 09:39:34.266090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.281 [2024-07-15 09:39:34.275161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.281 [2024-07-15 09:39:34.275757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.281 [2024-07-15 09:39:34.275777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.281 [2024-07-15 09:39:34.275784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.281 [2024-07-15 09:39:34.276001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.281 [2024-07-15 09:39:34.276218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.281 [2024-07-15 09:39:34.276226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.281 [2024-07-15 09:39:34.276234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.281 [2024-07-15 09:39:34.279723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.281 [2024-07-15 09:39:34.288998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.281 [2024-07-15 09:39:34.289532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.281 [2024-07-15 09:39:34.289548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.281 [2024-07-15 09:39:34.289556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.281 [2024-07-15 09:39:34.289778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.282 [2024-07-15 09:39:34.289995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.282 [2024-07-15 09:39:34.290004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.282 [2024-07-15 09:39:34.290011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.282 [2024-07-15 09:39:34.293501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.282 [2024-07-15 09:39:34.302773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.282 [2024-07-15 09:39:34.303365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.282 [2024-07-15 09:39:34.303381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.282 [2024-07-15 09:39:34.303389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.282 [2024-07-15 09:39:34.303604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.282 [2024-07-15 09:39:34.303827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.282 [2024-07-15 09:39:34.303838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.282 [2024-07-15 09:39:34.303845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.282 [2024-07-15 09:39:34.307334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.282 [2024-07-15 09:39:34.316606] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.282 [2024-07-15 09:39:34.317153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.282 [2024-07-15 09:39:34.317169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.282 [2024-07-15 09:39:34.317177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.282 [2024-07-15 09:39:34.317392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.282 [2024-07-15 09:39:34.317609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.282 [2024-07-15 09:39:34.317618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.282 [2024-07-15 09:39:34.317625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.282 [2024-07-15 09:39:34.321120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.282 [2024-07-15 09:39:34.330394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.282 [2024-07-15 09:39:34.331116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.282 [2024-07-15 09:39:34.331153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.282 [2024-07-15 09:39:34.331164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.282 [2024-07-15 09:39:34.331401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.282 [2024-07-15 09:39:34.331621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.282 [2024-07-15 09:39:34.331631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.282 [2024-07-15 09:39:34.331639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.282 [2024-07-15 09:39:34.335145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.282 [2024-07-15 09:39:34.344228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.282 [2024-07-15 09:39:34.344879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.282 [2024-07-15 09:39:34.344918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.282 [2024-07-15 09:39:34.344930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.282 [2024-07-15 09:39:34.345168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.282 [2024-07-15 09:39:34.345388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.282 [2024-07-15 09:39:34.345398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.282 [2024-07-15 09:39:34.345406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.282 [2024-07-15 09:39:34.348913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.282 [2024-07-15 09:39:34.357982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.282 [2024-07-15 09:39:34.358628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.282 [2024-07-15 09:39:34.358666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.282 [2024-07-15 09:39:34.358677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.282 [2024-07-15 09:39:34.358925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.282 [2024-07-15 09:39:34.359147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.282 [2024-07-15 09:39:34.359157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.282 [2024-07-15 09:39:34.359164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.282 [2024-07-15 09:39:34.362660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.282 [2024-07-15 09:39:34.371728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.282 [2024-07-15 09:39:34.372375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.282 [2024-07-15 09:39:34.372413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.282 [2024-07-15 09:39:34.372424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.282 [2024-07-15 09:39:34.372660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.282 [2024-07-15 09:39:34.372888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.282 [2024-07-15 09:39:34.372898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.282 [2024-07-15 09:39:34.372906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.282 [2024-07-15 09:39:34.376402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.282 [2024-07-15 09:39:34.385478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.282 [2024-07-15 09:39:34.386172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.282 [2024-07-15 09:39:34.386210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.282 [2024-07-15 09:39:34.386220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.282 [2024-07-15 09:39:34.386456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.282 [2024-07-15 09:39:34.386677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.282 [2024-07-15 09:39:34.386687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.282 [2024-07-15 09:39:34.386694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.282 [2024-07-15 09:39:34.390204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.282 [2024-07-15 09:39:34.399278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.282 [2024-07-15 09:39:34.399955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.282 [2024-07-15 09:39:34.399992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.282 [2024-07-15 09:39:34.400004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.282 [2024-07-15 09:39:34.400241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.282 [2024-07-15 09:39:34.400462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.282 [2024-07-15 09:39:34.400471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.282 [2024-07-15 09:39:34.400483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.282 [2024-07-15 09:39:34.403989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.282 [2024-07-15 09:39:34.413060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.282 [2024-07-15 09:39:34.413638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.282 [2024-07-15 09:39:34.413658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.282 [2024-07-15 09:39:34.413666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.282 [2024-07-15 09:39:34.413889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.282 [2024-07-15 09:39:34.414107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.282 [2024-07-15 09:39:34.414115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.282 [2024-07-15 09:39:34.414122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.282 [2024-07-15 09:39:34.417613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.282 [2024-07-15 09:39:34.426935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.282 [2024-07-15 09:39:34.427569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.282 [2024-07-15 09:39:34.427607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.282 [2024-07-15 09:39:34.427619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.282 [2024-07-15 09:39:34.427865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.282 [2024-07-15 09:39:34.428086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.282 [2024-07-15 09:39:34.428096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.282 [2024-07-15 09:39:34.428103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.282 [2024-07-15 09:39:34.431601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.282 [2024-07-15 09:39:34.440679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.283 [2024-07-15 09:39:34.441342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.283 [2024-07-15 09:39:34.441381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.283 [2024-07-15 09:39:34.441391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.283 [2024-07-15 09:39:34.441628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.283 [2024-07-15 09:39:34.441855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.283 [2024-07-15 09:39:34.441865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.283 [2024-07-15 09:39:34.441873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.283 [2024-07-15 09:39:34.445369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.283 [2024-07-15 09:39:34.454436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.283 [2024-07-15 09:39:34.455110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.283 [2024-07-15 09:39:34.455148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.283 [2024-07-15 09:39:34.455159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.283 [2024-07-15 09:39:34.455395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.283 [2024-07-15 09:39:34.455615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.283 [2024-07-15 09:39:34.455625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.283 [2024-07-15 09:39:34.455632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.283 [2024-07-15 09:39:34.459141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.283 [2024-07-15 09:39:34.468207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.283 [2024-07-15 09:39:34.468851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.283 [2024-07-15 09:39:34.468889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.283 [2024-07-15 09:39:34.468902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.283 [2024-07-15 09:39:34.469141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.283 [2024-07-15 09:39:34.469362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.283 [2024-07-15 09:39:34.469371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.283 [2024-07-15 09:39:34.469379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.283 [2024-07-15 09:39:34.472885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.543 [2024-07-15 09:39:34.481955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.543 [2024-07-15 09:39:34.482655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.543 [2024-07-15 09:39:34.482693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.543 [2024-07-15 09:39:34.482705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.543 [2024-07-15 09:39:34.482951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.543 [2024-07-15 09:39:34.483172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.543 [2024-07-15 09:39:34.483181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.543 [2024-07-15 09:39:34.483189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.543 [2024-07-15 09:39:34.486684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.543 [2024-07-15 09:39:34.495763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.543 [2024-07-15 09:39:34.496423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.543 [2024-07-15 09:39:34.496461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.543 [2024-07-15 09:39:34.496472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.543 [2024-07-15 09:39:34.496708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.543 [2024-07-15 09:39:34.496942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.543 [2024-07-15 09:39:34.496953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.543 [2024-07-15 09:39:34.496960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.543 [2024-07-15 09:39:34.500458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.543 [2024-07-15 09:39:34.509526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.543 [2024-07-15 09:39:34.510196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.543 [2024-07-15 09:39:34.510234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.543 [2024-07-15 09:39:34.510245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.543 [2024-07-15 09:39:34.510481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.543 [2024-07-15 09:39:34.510701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.543 [2024-07-15 09:39:34.510710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.543 [2024-07-15 09:39:34.510718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.543 [2024-07-15 09:39:34.514225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.543 [2024-07-15 09:39:34.523296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.543 [2024-07-15 09:39:34.524011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.543 [2024-07-15 09:39:34.524049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.543 [2024-07-15 09:39:34.524060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.543 [2024-07-15 09:39:34.524296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.543 [2024-07-15 09:39:34.524516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.543 [2024-07-15 09:39:34.524526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.543 [2024-07-15 09:39:34.524533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.543 [2024-07-15 09:39:34.528038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.543 [2024-07-15 09:39:34.537101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.543 [2024-07-15 09:39:34.537788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.543 [2024-07-15 09:39:34.537827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.543 [2024-07-15 09:39:34.537839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.543 [2024-07-15 09:39:34.538078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.543 [2024-07-15 09:39:34.538298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.543 [2024-07-15 09:39:34.538308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.543 [2024-07-15 09:39:34.538315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.543 [2024-07-15 09:39:34.541834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.543 [2024-07-15 09:39:34.550900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.543 [2024-07-15 09:39:34.551572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.543 [2024-07-15 09:39:34.551609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.543 [2024-07-15 09:39:34.551620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.543 [2024-07-15 09:39:34.551864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.544 [2024-07-15 09:39:34.552086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.544 [2024-07-15 09:39:34.552096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.544 [2024-07-15 09:39:34.552103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.544 [2024-07-15 09:39:34.555600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.544 [2024-07-15 09:39:34.564668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.544 [2024-07-15 09:39:34.565346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.544 [2024-07-15 09:39:34.565383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.544 [2024-07-15 09:39:34.565394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.544 [2024-07-15 09:39:34.565630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.544 [2024-07-15 09:39:34.565860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.544 [2024-07-15 09:39:34.565871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.544 [2024-07-15 09:39:34.565878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.544 [2024-07-15 09:39:34.569379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.544 [2024-07-15 09:39:34.578446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.544 [2024-07-15 09:39:34.579113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.544 [2024-07-15 09:39:34.579150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.544 [2024-07-15 09:39:34.579161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.544 [2024-07-15 09:39:34.579397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.544 [2024-07-15 09:39:34.579617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.544 [2024-07-15 09:39:34.579626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.544 [2024-07-15 09:39:34.579634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.544 [2024-07-15 09:39:34.583140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.544 [2024-07-15 09:39:34.592207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.544 [2024-07-15 09:39:34.592888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.544 [2024-07-15 09:39:34.592926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.544 [2024-07-15 09:39:34.592945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.544 [2024-07-15 09:39:34.593181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.544 [2024-07-15 09:39:34.593402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.544 [2024-07-15 09:39:34.593411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.544 [2024-07-15 09:39:34.593418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.544 [2024-07-15 09:39:34.596926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.544 [2024-07-15 09:39:34.605990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.544 [2024-07-15 09:39:34.606640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.544 [2024-07-15 09:39:34.606678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.544 [2024-07-15 09:39:34.606689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.544 [2024-07-15 09:39:34.607086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.544 [2024-07-15 09:39:34.607357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.544 [2024-07-15 09:39:34.607368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.544 [2024-07-15 09:39:34.607376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.544 [2024-07-15 09:39:34.610876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.544 [2024-07-15 09:39:34.619735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.544 [2024-07-15 09:39:34.620415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.544 [2024-07-15 09:39:34.620453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.544 [2024-07-15 09:39:34.620463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.544 [2024-07-15 09:39:34.620700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.544 [2024-07-15 09:39:34.620929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.544 [2024-07-15 09:39:34.620940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.544 [2024-07-15 09:39:34.620947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.544 [2024-07-15 09:39:34.624447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.544 [2024-07-15 09:39:34.633521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.544 [2024-07-15 09:39:34.634085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.544 [2024-07-15 09:39:34.634103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.544 [2024-07-15 09:39:34.634111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.544 [2024-07-15 09:39:34.634327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.544 [2024-07-15 09:39:34.634543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.544 [2024-07-15 09:39:34.634557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.544 [2024-07-15 09:39:34.634564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.544 [2024-07-15 09:39:34.638062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.544 [2024-07-15 09:39:34.647343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.544 [2024-07-15 09:39:34.648038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.544 [2024-07-15 09:39:34.648076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.544 [2024-07-15 09:39:34.648088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.544 [2024-07-15 09:39:34.648324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.544 [2024-07-15 09:39:34.648545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.544 [2024-07-15 09:39:34.648554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.544 [2024-07-15 09:39:34.648562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.544 [2024-07-15 09:39:34.652068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.544 [2024-07-15 09:39:34.661140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.544 [2024-07-15 09:39:34.661835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.544 [2024-07-15 09:39:34.661873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.544 [2024-07-15 09:39:34.661884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.544 [2024-07-15 09:39:34.662121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.544 [2024-07-15 09:39:34.662341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.544 [2024-07-15 09:39:34.662350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.544 [2024-07-15 09:39:34.662358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.544 [2024-07-15 09:39:34.665860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.544 [2024-07-15 09:39:34.674922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.544 [2024-07-15 09:39:34.675500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.544 [2024-07-15 09:39:34.675519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.544 [2024-07-15 09:39:34.675527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.544 [2024-07-15 09:39:34.675744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.544 [2024-07-15 09:39:34.675966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.544 [2024-07-15 09:39:34.675976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.544 [2024-07-15 09:39:34.675982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.544 [2024-07-15 09:39:34.679478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.544 [2024-07-15 09:39:34.688759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.544 [2024-07-15 09:39:34.689405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.544 [2024-07-15 09:39:34.689443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.544 [2024-07-15 09:39:34.689453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.544 [2024-07-15 09:39:34.689689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.544 [2024-07-15 09:39:34.689918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.544 [2024-07-15 09:39:34.689929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.544 [2024-07-15 09:39:34.689937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.544 [2024-07-15 09:39:34.693435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.544 [2024-07-15 09:39:34.702501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.544 [2024-07-15 09:39:34.703177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.544 [2024-07-15 09:39:34.703215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.545 [2024-07-15 09:39:34.703225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.545 [2024-07-15 09:39:34.703462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.545 [2024-07-15 09:39:34.703682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.545 [2024-07-15 09:39:34.703692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.545 [2024-07-15 09:39:34.703700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.545 [2024-07-15 09:39:34.707208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.545 [2024-07-15 09:39:34.716285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.545 [2024-07-15 09:39:34.716846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.545 [2024-07-15 09:39:34.716885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.545 [2024-07-15 09:39:34.716896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.545 [2024-07-15 09:39:34.717133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.545 [2024-07-15 09:39:34.717353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.545 [2024-07-15 09:39:34.717363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.545 [2024-07-15 09:39:34.717370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.545 [2024-07-15 09:39:34.720874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.545 [2024-07-15 09:39:34.730150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.545 [2024-07-15 09:39:34.730829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.545 [2024-07-15 09:39:34.730867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.545 [2024-07-15 09:39:34.730879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.545 [2024-07-15 09:39:34.731125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.545 [2024-07-15 09:39:34.731346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.545 [2024-07-15 09:39:34.731356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.545 [2024-07-15 09:39:34.731363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.545 [2024-07-15 09:39:34.734869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.805 [2024-07-15 09:39:34.743947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.805 [2024-07-15 09:39:34.744526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.805 [2024-07-15 09:39:34.744545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.805 [2024-07-15 09:39:34.744553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.805 [2024-07-15 09:39:34.744777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.805 [2024-07-15 09:39:34.744995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.805 [2024-07-15 09:39:34.745004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.805 [2024-07-15 09:39:34.745011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.805 [2024-07-15 09:39:34.748505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.805 [2024-07-15 09:39:34.757774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.805 [2024-07-15 09:39:34.758417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.805 [2024-07-15 09:39:34.758455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.805 [2024-07-15 09:39:34.758466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.805 [2024-07-15 09:39:34.758702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.805 [2024-07-15 09:39:34.758930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.805 [2024-07-15 09:39:34.758941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.805 [2024-07-15 09:39:34.758949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.805 [2024-07-15 09:39:34.762449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.805 [2024-07-15 09:39:34.771517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.805 [2024-07-15 09:39:34.772197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.806 [2024-07-15 09:39:34.772235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.806 [2024-07-15 09:39:34.772246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.806 [2024-07-15 09:39:34.772482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.806 [2024-07-15 09:39:34.772703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.806 [2024-07-15 09:39:34.772712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.806 [2024-07-15 09:39:34.772724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.806 [2024-07-15 09:39:34.776236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.806 [2024-07-15 09:39:34.785305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.806 [2024-07-15 09:39:34.786009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.806 [2024-07-15 09:39:34.786047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.806 [2024-07-15 09:39:34.786058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.806 [2024-07-15 09:39:34.786295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.806 [2024-07-15 09:39:34.786516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.806 [2024-07-15 09:39:34.786525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.806 [2024-07-15 09:39:34.786533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.806 [2024-07-15 09:39:34.790037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.806 [2024-07-15 09:39:34.799106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.806 [2024-07-15 09:39:34.799770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.806 [2024-07-15 09:39:34.799808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.806 [2024-07-15 09:39:34.799821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.806 [2024-07-15 09:39:34.800058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.806 [2024-07-15 09:39:34.800279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.806 [2024-07-15 09:39:34.800289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.806 [2024-07-15 09:39:34.800296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.806 [2024-07-15 09:39:34.803796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.806 [2024-07-15 09:39:34.812865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.806 [2024-07-15 09:39:34.813448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.806 [2024-07-15 09:39:34.813467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.806 [2024-07-15 09:39:34.813475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.806 [2024-07-15 09:39:34.813691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.806 [2024-07-15 09:39:34.813916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.806 [2024-07-15 09:39:34.813926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.806 [2024-07-15 09:39:34.813933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.806 [2024-07-15 09:39:34.817425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.806 [2024-07-15 09:39:34.826694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.806 [2024-07-15 09:39:34.827349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.806 [2024-07-15 09:39:34.827386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.806 [2024-07-15 09:39:34.827397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.806 [2024-07-15 09:39:34.827633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.806 [2024-07-15 09:39:34.827862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.806 [2024-07-15 09:39:34.827872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.806 [2024-07-15 09:39:34.827879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.806 [2024-07-15 09:39:34.831377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.806 [2024-07-15 09:39:34.840470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.806 [2024-07-15 09:39:34.841101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.806 [2024-07-15 09:39:34.841139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.806 [2024-07-15 09:39:34.841150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.806 [2024-07-15 09:39:34.841386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.806 [2024-07-15 09:39:34.841607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.806 [2024-07-15 09:39:34.841616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.806 [2024-07-15 09:39:34.841623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.806 [2024-07-15 09:39:34.845128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.806 [2024-07-15 09:39:34.854406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.806 [2024-07-15 09:39:34.855102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.806 [2024-07-15 09:39:34.855140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.806 [2024-07-15 09:39:34.855151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.806 [2024-07-15 09:39:34.855387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.806 [2024-07-15 09:39:34.855607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.806 [2024-07-15 09:39:34.855617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.806 [2024-07-15 09:39:34.855624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.806 [2024-07-15 09:39:34.859128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.806 [2024-07-15 09:39:34.868196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.806 [2024-07-15 09:39:34.868636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.806 [2024-07-15 09:39:34.868657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.806 [2024-07-15 09:39:34.868665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.806 [2024-07-15 09:39:34.868894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.806 [2024-07-15 09:39:34.869113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.806 [2024-07-15 09:39:34.869121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.806 [2024-07-15 09:39:34.869128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.806 [2024-07-15 09:39:34.872624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.806 [2024-07-15 09:39:34.882097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.806 [2024-07-15 09:39:34.882664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.806 [2024-07-15 09:39:34.882680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.806 [2024-07-15 09:39:34.882688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.806 [2024-07-15 09:39:34.882910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.806 [2024-07-15 09:39:34.883127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.806 [2024-07-15 09:39:34.883136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.806 [2024-07-15 09:39:34.883143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.806 [2024-07-15 09:39:34.886632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.806 [2024-07-15 09:39:34.895901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.806 [2024-07-15 09:39:34.896564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.806 [2024-07-15 09:39:34.896602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.806 [2024-07-15 09:39:34.896613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.806 [2024-07-15 09:39:34.896859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.806 [2024-07-15 09:39:34.897080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.806 [2024-07-15 09:39:34.897090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.806 [2024-07-15 09:39:34.897097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.806 [2024-07-15 09:39:34.900598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.806 [2024-07-15 09:39:34.909667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.806 [2024-07-15 09:39:34.910299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.806 [2024-07-15 09:39:34.910337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.806 [2024-07-15 09:39:34.910348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.806 [2024-07-15 09:39:34.910584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.806 [2024-07-15 09:39:34.910814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.807 [2024-07-15 09:39:34.910824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.807 [2024-07-15 09:39:34.910831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.807 [2024-07-15 09:39:34.914335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.807 [2024-07-15 09:39:34.923401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.807 [2024-07-15 09:39:34.924100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.807 [2024-07-15 09:39:34.924139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.807 [2024-07-15 09:39:34.924149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.807 [2024-07-15 09:39:34.924386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.807 [2024-07-15 09:39:34.924606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.807 [2024-07-15 09:39:34.924616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.807 [2024-07-15 09:39:34.924623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.807 [2024-07-15 09:39:34.928127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.807 [2024-07-15 09:39:34.937193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.807 [2024-07-15 09:39:34.937857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.807 [2024-07-15 09:39:34.937895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.807 [2024-07-15 09:39:34.937907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.807 [2024-07-15 09:39:34.938146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.807 [2024-07-15 09:39:34.938366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.807 [2024-07-15 09:39:34.938375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.807 [2024-07-15 09:39:34.938383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.807 [2024-07-15 09:39:34.941900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.807 [2024-07-15 09:39:34.950968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.807 [2024-07-15 09:39:34.951626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.807 [2024-07-15 09:39:34.951664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.807 [2024-07-15 09:39:34.951675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.807 [2024-07-15 09:39:34.951920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.807 [2024-07-15 09:39:34.952142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.807 [2024-07-15 09:39:34.952151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.807 [2024-07-15 09:39:34.952159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.807 [2024-07-15 09:39:34.955656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.807 [2024-07-15 09:39:34.964723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.807 [2024-07-15 09:39:34.965313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.807 [2024-07-15 09:39:34.965332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.807 [2024-07-15 09:39:34.965345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.807 [2024-07-15 09:39:34.965562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.807 [2024-07-15 09:39:34.965786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.807 [2024-07-15 09:39:34.965795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.807 [2024-07-15 09:39:34.965802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.807 [2024-07-15 09:39:34.969293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.807 [2024-07-15 09:39:34.978558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.807 [2024-07-15 09:39:34.979229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.807 [2024-07-15 09:39:34.979267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.807 [2024-07-15 09:39:34.979277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.807 [2024-07-15 09:39:34.979514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.807 [2024-07-15 09:39:34.979734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.807 [2024-07-15 09:39:34.979744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.807 [2024-07-15 09:39:34.979760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.807 [2024-07-15 09:39:34.983259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.807 [2024-07-15 09:39:34.992332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.807 [2024-07-15 09:39:34.993028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.807 [2024-07-15 09:39:34.993066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:47.807 [2024-07-15 09:39:34.993077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:47.807 [2024-07-15 09:39:34.993313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:47.807 [2024-07-15 09:39:34.993534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.807 [2024-07-15 09:39:34.993543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.807 [2024-07-15 09:39:34.993550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.807 [2024-07-15 09:39:34.997057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.068 [2024-07-15 09:39:35.006125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.068 [2024-07-15 09:39:35.006835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.068 [2024-07-15 09:39:35.006872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.068 [2024-07-15 09:39:35.006885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.068 [2024-07-15 09:39:35.007124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.068 [2024-07-15 09:39:35.007344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.068 [2024-07-15 09:39:35.007358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.068 [2024-07-15 09:39:35.007366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.068 [2024-07-15 09:39:35.010874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.068 [2024-07-15 09:39:35.019942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.068 [2024-07-15 09:39:35.020621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.068 [2024-07-15 09:39:35.020658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.068 [2024-07-15 09:39:35.020670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.068 [2024-07-15 09:39:35.020917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.068 [2024-07-15 09:39:35.021138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.068 [2024-07-15 09:39:35.021147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.068 [2024-07-15 09:39:35.021155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.068 [2024-07-15 09:39:35.024652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.068 [2024-07-15 09:39:35.033731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.068 [2024-07-15 09:39:35.034395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.068 [2024-07-15 09:39:35.034433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.068 [2024-07-15 09:39:35.034444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.068 [2024-07-15 09:39:35.034681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.068 [2024-07-15 09:39:35.034910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.068 [2024-07-15 09:39:35.034920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.068 [2024-07-15 09:39:35.034927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.068 [2024-07-15 09:39:35.038427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.068 [2024-07-15 09:39:35.047515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.068 [2024-07-15 09:39:35.048183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.068 [2024-07-15 09:39:35.048221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.068 [2024-07-15 09:39:35.048231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.068 [2024-07-15 09:39:35.048467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.068 [2024-07-15 09:39:35.048688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.068 [2024-07-15 09:39:35.048698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.068 [2024-07-15 09:39:35.048705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.068 [2024-07-15 09:39:35.052215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.068 [2024-07-15 09:39:35.061284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.068 [2024-07-15 09:39:35.061939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.068 [2024-07-15 09:39:35.061977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.068 [2024-07-15 09:39:35.061988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.068 [2024-07-15 09:39:35.062224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.068 [2024-07-15 09:39:35.062445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.068 [2024-07-15 09:39:35.062454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.068 [2024-07-15 09:39:35.062462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.068 [2024-07-15 09:39:35.065970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.068 [2024-07-15 09:39:35.075035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.068 [2024-07-15 09:39:35.075576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.068 [2024-07-15 09:39:35.075595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.068 [2024-07-15 09:39:35.075603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.068 [2024-07-15 09:39:35.075827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.068 [2024-07-15 09:39:35.076045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.068 [2024-07-15 09:39:35.076054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.068 [2024-07-15 09:39:35.076062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.068 [2024-07-15 09:39:35.079553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.068 [2024-07-15 09:39:35.088824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.068 [2024-07-15 09:39:35.089429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.068 [2024-07-15 09:39:35.089467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.068 [2024-07-15 09:39:35.089478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.068 [2024-07-15 09:39:35.089714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.068 [2024-07-15 09:39:35.089945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.068 [2024-07-15 09:39:35.089955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.068 [2024-07-15 09:39:35.089963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.068 [2024-07-15 09:39:35.093461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 893978 Killed "${NVMF_APP[@]}" "$@" 00:30:48.068 09:39:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:30:48.068 09:39:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:48.068 09:39:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:48.068 09:39:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:48.068 09:39:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:48.068 [2024-07-15 09:39:35.102735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.068 [2024-07-15 09:39:35.103176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.068 [2024-07-15 09:39:35.103197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.068 [2024-07-15 09:39:35.103205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.068 [2024-07-15 09:39:35.103423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.068 [2024-07-15 09:39:35.103640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.068 [2024-07-15 09:39:35.103650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.069 [2024-07-15 09:39:35.103657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:48.069 09:39:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=895686 00:30:48.069 09:39:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 895686 00:30:48.069 09:39:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:48.069 09:39:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 895686 ']' 00:30:48.069 09:39:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.069 09:39:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:48.069 [2024-07-15 09:39:35.107166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.069 09:39:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:48.069 09:39:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:48.069 09:39:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:48.069 [2024-07-15 09:39:35.116661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.069 [2024-07-15 09:39:35.117221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.069 [2024-07-15 09:39:35.117238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.069 [2024-07-15 09:39:35.117246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.069 [2024-07-15 09:39:35.117462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.069 [2024-07-15 09:39:35.117679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.069 [2024-07-15 09:39:35.117687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.069 [2024-07-15 09:39:35.117694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.069 [2024-07-15 09:39:35.121196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
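At this point the test has killed the previous nvmf_tgt (pid 893978, bdevperf.sh line 35), and tgt_init/nvmfappstart relaunch a fresh target (pid 895686) inside the cvl_0_0_ns_spdk namespace, after which waitforlisten blocks until the new process answers on /var/tmp/spdk.sock; the reconnect failures interleaved above persist only until that listener comes back. Below is a rough, illustrative sketch of what such a start-and-wait step amounts to; the binary path and flags are copied from the log, but the loop itself is a simplification, not the actual common.sh implementation.

    # Illustrative only: launch the target in the test namespace and poll for
    # its RPC socket, roughly what nvmfappstart + waitforlisten do here.
    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"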
00:30:48.069 [2024-07-15 09:39:35.130480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.069 [2024-07-15 09:39:35.131080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.069 [2024-07-15 09:39:35.131117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.069 [2024-07-15 09:39:35.131127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.069 [2024-07-15 09:39:35.131368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.069 [2024-07-15 09:39:35.131588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.069 [2024-07-15 09:39:35.131596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.069 [2024-07-15 09:39:35.131604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.069 [2024-07-15 09:39:35.135112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.069 [2024-07-15 09:39:35.144400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.069 [2024-07-15 09:39:35.145078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.069 [2024-07-15 09:39:35.145115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.069 [2024-07-15 09:39:35.145126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.069 [2024-07-15 09:39:35.145362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.069 [2024-07-15 09:39:35.145582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.069 [2024-07-15 09:39:35.145590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.069 [2024-07-15 09:39:35.145597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.069 [2024-07-15 09:39:35.149102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.069 [2024-07-15 09:39:35.158171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.069 [2024-07-15 09:39:35.158812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.069 [2024-07-15 09:39:35.158849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.069 [2024-07-15 09:39:35.158860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.069 [2024-07-15 09:39:35.159097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.069 [2024-07-15 09:39:35.159316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.069 [2024-07-15 09:39:35.159324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.069 [2024-07-15 09:39:35.159332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.069 [2024-07-15 09:39:35.160474] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:30:48.069 [2024-07-15 09:39:35.160520] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:48.069 [2024-07-15 09:39:35.162839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.069 [2024-07-15 09:39:35.171913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.069 [2024-07-15 09:39:35.172568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.069 [2024-07-15 09:39:35.172604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.069 [2024-07-15 09:39:35.172615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.069 [2024-07-15 09:39:35.172858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.069 [2024-07-15 09:39:35.173083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.069 [2024-07-15 09:39:35.173092] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.069 [2024-07-15 09:39:35.173100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.069 [2024-07-15 09:39:35.176597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.069 [2024-07-15 09:39:35.185664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.069 [2024-07-15 09:39:35.186289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.069 [2024-07-15 09:39:35.186326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.069 [2024-07-15 09:39:35.186337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.069 [2024-07-15 09:39:35.186573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.069 [2024-07-15 09:39:35.186801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.069 [2024-07-15 09:39:35.186810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.069 [2024-07-15 09:39:35.186818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.069 [2024-07-15 09:39:35.190315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.069 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.069 [2024-07-15 09:39:35.199673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.069 [2024-07-15 09:39:35.200328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.069 [2024-07-15 09:39:35.200364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.069 [2024-07-15 09:39:35.200376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.069 [2024-07-15 09:39:35.200612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.069 [2024-07-15 09:39:35.200839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.069 [2024-07-15 09:39:35.200848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.069 [2024-07-15 09:39:35.200855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.069 [2024-07-15 09:39:35.204355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
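The EAL line above ("No free 2048 kB hugepages reported on node 1") is a warning rather than a fatal error: it typically means no free 2 MB hugepages were found on NUMA node 1, which is common when hugepages are reserved only on node 0 or in 1 GB sizes, and target startup continues. If this needed checking on the test host, the per-node counters live in sysfs; the snippet below is a read-only inspection, not something the test runs.

    # Per-NUMA-node 2 MB hugepage counters (read-only inspection).
    for n in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
        echo "$n: total=$(cat "$n/nr_hugepages") free=$(cat "$n/free_hugepages")"
    done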
00:30:48.069 [2024-07-15 09:39:35.213423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.069 [2024-07-15 09:39:35.214120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.069 [2024-07-15 09:39:35.214157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.069 [2024-07-15 09:39:35.214168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.069 [2024-07-15 09:39:35.214406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.069 [2024-07-15 09:39:35.214627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.069 [2024-07-15 09:39:35.214636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.069 [2024-07-15 09:39:35.214647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.069 [2024-07-15 09:39:35.218153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.069 [2024-07-15 09:39:35.227221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.069 [2024-07-15 09:39:35.227865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.069 [2024-07-15 09:39:35.227902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.069 [2024-07-15 09:39:35.227915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.069 [2024-07-15 09:39:35.228153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.069 [2024-07-15 09:39:35.228374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.069 [2024-07-15 09:39:35.228382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.069 [2024-07-15 09:39:35.228390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.069 [2024-07-15 09:39:35.231898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.069 [2024-07-15 09:39:35.240980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.069 [2024-07-15 09:39:35.241661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.069 [2024-07-15 09:39:35.241698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.070 [2024-07-15 09:39:35.241709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.070 [2024-07-15 09:39:35.241952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.070 [2024-07-15 09:39:35.242174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.070 [2024-07-15 09:39:35.242183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.070 [2024-07-15 09:39:35.242190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.070 [2024-07-15 09:39:35.245684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.070 [2024-07-15 09:39:35.249454] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:48.070 [2024-07-15 09:39:35.254757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.070 [2024-07-15 09:39:35.255415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.070 [2024-07-15 09:39:35.255452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.070 [2024-07-15 09:39:35.255462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.070 [2024-07-15 09:39:35.255699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.070 [2024-07-15 09:39:35.255926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.070 [2024-07-15 09:39:35.255935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.070 [2024-07-15 09:39:35.255943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.070 [2024-07-15 09:39:35.259444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.331 [2024-07-15 09:39:35.268515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.331 [2024-07-15 09:39:35.269110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.331 [2024-07-15 09:39:35.269128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.331 [2024-07-15 09:39:35.269136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.331 [2024-07-15 09:39:35.269353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.331 [2024-07-15 09:39:35.269569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.331 [2024-07-15 09:39:35.269576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.331 [2024-07-15 09:39:35.269584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.331 [2024-07-15 09:39:35.273081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.331 [2024-07-15 09:39:35.282357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.331 [2024-07-15 09:39:35.283019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.331 [2024-07-15 09:39:35.283058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.331 [2024-07-15 09:39:35.283069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.331 [2024-07-15 09:39:35.283306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.331 [2024-07-15 09:39:35.283527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.331 [2024-07-15 09:39:35.283535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.331 [2024-07-15 09:39:35.283542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.331 [2024-07-15 09:39:35.287045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.331 [2024-07-15 09:39:35.296111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.331 [2024-07-15 09:39:35.296774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.331 [2024-07-15 09:39:35.296812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.331 [2024-07-15 09:39:35.296824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.331 [2024-07-15 09:39:35.297062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.331 [2024-07-15 09:39:35.297283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.331 [2024-07-15 09:39:35.297291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.331 [2024-07-15 09:39:35.297299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.331 [2024-07-15 09:39:35.300804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.331 [2024-07-15 09:39:35.302918] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:48.331 [2024-07-15 09:39:35.302940] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:48.331 [2024-07-15 09:39:35.302946] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:48.331 [2024-07-15 09:39:35.302951] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:48.331 [2024-07-15 09:39:35.302956] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:48.331 [2024-07-15 09:39:35.303193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:48.331 [2024-07-15 09:39:35.303316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.331 [2024-07-15 09:39:35.303318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:48.331 [2024-07-15 09:39:35.309876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.331 [2024-07-15 09:39:35.310552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.331 [2024-07-15 09:39:35.310590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.331 [2024-07-15 09:39:35.310601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.331 [2024-07-15 09:39:35.310846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.331 [2024-07-15 09:39:35.311066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.331 [2024-07-15 09:39:35.311076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.331 [2024-07-15 09:39:35.311084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
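The three reactor messages above line up with the -m 0xE core mask passed to nvmf_tgt (and the matching -c 0xE in the DPDK EAL parameters): 0xE is binary 1110, so bits 1, 2 and 3 are set, SPDK pins one reactor to each of cores 1-3 and leaves core 0 alone, which is why spdk_app_start reports three available cores. A quick way to expand such a mask into a CPU list (illustrative, not part of the test scripts):

    # Expand an SPDK/DPDK core mask into the CPU list it selects.
    mask=0xE
    for core in $(seq 0 63); do
        (( (mask >> core) & 1 )) && echo "core $core"
    done
    # for mask=0xE this prints cores 1, 2 and 3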
00:30:48.331 [2024-07-15 09:39:35.314578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.331 [2024-07-15 09:39:35.323646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.331 [2024-07-15 09:39:35.324172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.331 [2024-07-15 09:39:35.324209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.331 [2024-07-15 09:39:35.324221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.331 [2024-07-15 09:39:35.324458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.331 [2024-07-15 09:39:35.324678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.331 [2024-07-15 09:39:35.324686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.331 [2024-07-15 09:39:35.324693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.331 [2024-07-15 09:39:35.328197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.331 [2024-07-15 09:39:35.337470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.331 [2024-07-15 09:39:35.338021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.331 [2024-07-15 09:39:35.338058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.331 [2024-07-15 09:39:35.338069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.331 [2024-07-15 09:39:35.338305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.331 [2024-07-15 09:39:35.338526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.331 [2024-07-15 09:39:35.338534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.331 [2024-07-15 09:39:35.338542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.331 [2024-07-15 09:39:35.342058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.332 [2024-07-15 09:39:35.351333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.332 [2024-07-15 09:39:35.352079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.332 [2024-07-15 09:39:35.352116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.332 [2024-07-15 09:39:35.352127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.332 [2024-07-15 09:39:35.352363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.332 [2024-07-15 09:39:35.352583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.332 [2024-07-15 09:39:35.352591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.332 [2024-07-15 09:39:35.352599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.332 [2024-07-15 09:39:35.356103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.332 [2024-07-15 09:39:35.365167] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.332 [2024-07-15 09:39:35.365745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.332 [2024-07-15 09:39:35.365787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.332 [2024-07-15 09:39:35.365798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.332 [2024-07-15 09:39:35.366034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.332 [2024-07-15 09:39:35.366254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.332 [2024-07-15 09:39:35.366262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.332 [2024-07-15 09:39:35.366270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.332 [2024-07-15 09:39:35.369768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.332 [2024-07-15 09:39:35.379033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.332 [2024-07-15 09:39:35.379719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.332 [2024-07-15 09:39:35.379761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.332 [2024-07-15 09:39:35.379772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.332 [2024-07-15 09:39:35.380009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.332 [2024-07-15 09:39:35.380228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.332 [2024-07-15 09:39:35.380236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.332 [2024-07-15 09:39:35.380244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.332 [2024-07-15 09:39:35.383740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.332 [2024-07-15 09:39:35.392809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.332 [2024-07-15 09:39:35.393319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.332 [2024-07-15 09:39:35.393356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.332 [2024-07-15 09:39:35.393367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.332 [2024-07-15 09:39:35.393603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.332 [2024-07-15 09:39:35.393834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.332 [2024-07-15 09:39:35.393843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.332 [2024-07-15 09:39:35.393851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.332 [2024-07-15 09:39:35.397348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.332 [2024-07-15 09:39:35.406616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.332 [2024-07-15 09:39:35.407109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.332 [2024-07-15 09:39:35.407146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.332 [2024-07-15 09:39:35.407158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.332 [2024-07-15 09:39:35.407396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.332 [2024-07-15 09:39:35.407615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.332 [2024-07-15 09:39:35.407623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.332 [2024-07-15 09:39:35.407631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.332 [2024-07-15 09:39:35.411136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.332 [2024-07-15 09:39:35.420405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.332 [2024-07-15 09:39:35.421378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.332 [2024-07-15 09:39:35.421399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.332 [2024-07-15 09:39:35.421408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.332 [2024-07-15 09:39:35.421627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.332 [2024-07-15 09:39:35.421848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.332 [2024-07-15 09:39:35.421856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.332 [2024-07-15 09:39:35.421863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.332 [2024-07-15 09:39:35.425357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.332 [2024-07-15 09:39:35.434212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.332 [2024-07-15 09:39:35.434779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.332 [2024-07-15 09:39:35.434797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.332 [2024-07-15 09:39:35.434804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.332 [2024-07-15 09:39:35.435022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.332 [2024-07-15 09:39:35.435238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.332 [2024-07-15 09:39:35.435246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.332 [2024-07-15 09:39:35.435253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.332 [2024-07-15 09:39:35.438763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.332 [2024-07-15 09:39:35.448171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.332 [2024-07-15 09:39:35.448815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.333 [2024-07-15 09:39:35.448852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.333 [2024-07-15 09:39:35.448865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.333 [2024-07-15 09:39:35.449102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.333 [2024-07-15 09:39:35.449323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.333 [2024-07-15 09:39:35.449332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.333 [2024-07-15 09:39:35.449339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.333 [2024-07-15 09:39:35.452847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.333 [2024-07-15 09:39:35.461917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.333 [2024-07-15 09:39:35.462592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.333 [2024-07-15 09:39:35.462629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.333 [2024-07-15 09:39:35.462640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.333 [2024-07-15 09:39:35.462883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.333 [2024-07-15 09:39:35.463103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.333 [2024-07-15 09:39:35.463112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.333 [2024-07-15 09:39:35.463119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.333 [2024-07-15 09:39:35.466617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.333 [2024-07-15 09:39:35.475687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.333 [2024-07-15 09:39:35.476397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.333 [2024-07-15 09:39:35.476434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.333 [2024-07-15 09:39:35.476445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.333 [2024-07-15 09:39:35.476682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.333 [2024-07-15 09:39:35.476908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.333 [2024-07-15 09:39:35.476917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.333 [2024-07-15 09:39:35.476924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.333 [2024-07-15 09:39:35.480421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.333 [2024-07-15 09:39:35.489491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.333 [2024-07-15 09:39:35.490130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.333 [2024-07-15 09:39:35.490168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.333 [2024-07-15 09:39:35.490186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.333 [2024-07-15 09:39:35.490422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.333 [2024-07-15 09:39:35.490642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.333 [2024-07-15 09:39:35.490651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.333 [2024-07-15 09:39:35.490658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.333 [2024-07-15 09:39:35.494161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.333 [2024-07-15 09:39:35.503232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.333 [2024-07-15 09:39:35.503799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.333 [2024-07-15 09:39:35.503825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.333 [2024-07-15 09:39:35.503834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.333 [2024-07-15 09:39:35.504056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.333 [2024-07-15 09:39:35.504273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.333 [2024-07-15 09:39:35.504283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.333 [2024-07-15 09:39:35.504290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.333 [2024-07-15 09:39:35.507790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.333 [2024-07-15 09:39:35.517059] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.333 [2024-07-15 09:39:35.517689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.333 [2024-07-15 09:39:35.517725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.333 [2024-07-15 09:39:35.517736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.333 [2024-07-15 09:39:35.517981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.333 [2024-07-15 09:39:35.518202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.333 [2024-07-15 09:39:35.518211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.333 [2024-07-15 09:39:35.518218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.333 [2024-07-15 09:39:35.521715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.594 [2024-07-15 09:39:35.530994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.594 [2024-07-15 09:39:35.531607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.594 [2024-07-15 09:39:35.531625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.594 [2024-07-15 09:39:35.531633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.594 [2024-07-15 09:39:35.531855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.594 [2024-07-15 09:39:35.532071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.594 [2024-07-15 09:39:35.532083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.594 [2024-07-15 09:39:35.532090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.594 [2024-07-15 09:39:35.535604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.594 [2024-07-15 09:39:35.544892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.595 [2024-07-15 09:39:35.545347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.595 [2024-07-15 09:39:35.545363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.595 [2024-07-15 09:39:35.545370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.595 [2024-07-15 09:39:35.545586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.595 [2024-07-15 09:39:35.545808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.595 [2024-07-15 09:39:35.545817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.595 [2024-07-15 09:39:35.545824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.595 [2024-07-15 09:39:35.549313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.595 [2024-07-15 09:39:35.558791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.595 [2024-07-15 09:39:35.559439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.595 [2024-07-15 09:39:35.559475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.595 [2024-07-15 09:39:35.559486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.595 [2024-07-15 09:39:35.559722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.595 [2024-07-15 09:39:35.559950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.595 [2024-07-15 09:39:35.559959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.595 [2024-07-15 09:39:35.559967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.595 [2024-07-15 09:39:35.563463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.595 [2024-07-15 09:39:35.572538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.595 [2024-07-15 09:39:35.573093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.595 [2024-07-15 09:39:35.573130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.595 [2024-07-15 09:39:35.573141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.595 [2024-07-15 09:39:35.573377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.595 [2024-07-15 09:39:35.573597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.595 [2024-07-15 09:39:35.573606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.595 [2024-07-15 09:39:35.573613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.595 [2024-07-15 09:39:35.577119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.595 [2024-07-15 09:39:35.586409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.595 [2024-07-15 09:39:35.587002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.595 [2024-07-15 09:39:35.587039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.595 [2024-07-15 09:39:35.587050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.595 [2024-07-15 09:39:35.587286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.595 [2024-07-15 09:39:35.587506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.595 [2024-07-15 09:39:35.587514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.595 [2024-07-15 09:39:35.587521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.595 [2024-07-15 09:39:35.591025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.595 [2024-07-15 09:39:35.600303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.595 [2024-07-15 09:39:35.600904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.595 [2024-07-15 09:39:35.600924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.595 [2024-07-15 09:39:35.600931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.595 [2024-07-15 09:39:35.601148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.595 [2024-07-15 09:39:35.601364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.595 [2024-07-15 09:39:35.601372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.595 [2024-07-15 09:39:35.601378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.595 [2024-07-15 09:39:35.604875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.595 [2024-07-15 09:39:35.614163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.595 [2024-07-15 09:39:35.614854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.595 [2024-07-15 09:39:35.614891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.595 [2024-07-15 09:39:35.614904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.595 [2024-07-15 09:39:35.615141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.595 [2024-07-15 09:39:35.615361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.595 [2024-07-15 09:39:35.615370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.595 [2024-07-15 09:39:35.615377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.595 [2024-07-15 09:39:35.618882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.595 [2024-07-15 09:39:35.627953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.595 [2024-07-15 09:39:35.628635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.595 [2024-07-15 09:39:35.628672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.595 [2024-07-15 09:39:35.628683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.595 [2024-07-15 09:39:35.628931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.595 [2024-07-15 09:39:35.629152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.595 [2024-07-15 09:39:35.629160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.595 [2024-07-15 09:39:35.629167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.595 [2024-07-15 09:39:35.632665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.595 [2024-07-15 09:39:35.641746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.595 [2024-07-15 09:39:35.642310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.595 [2024-07-15 09:39:35.642347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.595 [2024-07-15 09:39:35.642359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.595 [2024-07-15 09:39:35.642596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.595 [2024-07-15 09:39:35.642822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.595 [2024-07-15 09:39:35.642832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.595 [2024-07-15 09:39:35.642839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.595 [2024-07-15 09:39:35.646337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.595 [2024-07-15 09:39:35.655613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.595 [2024-07-15 09:39:35.656314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.595 [2024-07-15 09:39:35.656351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.595 [2024-07-15 09:39:35.656362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.595 [2024-07-15 09:39:35.656598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.595 [2024-07-15 09:39:35.656827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.595 [2024-07-15 09:39:35.656836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.595 [2024-07-15 09:39:35.656843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.595 [2024-07-15 09:39:35.660339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.595 [2024-07-15 09:39:35.669410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.595 [2024-07-15 09:39:35.670106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.595 [2024-07-15 09:39:35.670144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.595 [2024-07-15 09:39:35.670155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.595 [2024-07-15 09:39:35.670391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.595 [2024-07-15 09:39:35.670611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.595 [2024-07-15 09:39:35.670619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.595 [2024-07-15 09:39:35.670631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.595 [2024-07-15 09:39:35.674138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.595 [2024-07-15 09:39:35.683211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.595 [2024-07-15 09:39:35.683890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.595 [2024-07-15 09:39:35.683927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.595 [2024-07-15 09:39:35.683938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.595 [2024-07-15 09:39:35.684173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.595 [2024-07-15 09:39:35.684393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.595 [2024-07-15 09:39:35.684401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.595 [2024-07-15 09:39:35.684408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.596 [2024-07-15 09:39:35.687913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.596 [2024-07-15 09:39:35.696985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.596 [2024-07-15 09:39:35.697632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.596 [2024-07-15 09:39:35.697669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.596 [2024-07-15 09:39:35.697680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.596 [2024-07-15 09:39:35.697924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.596 [2024-07-15 09:39:35.698145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.596 [2024-07-15 09:39:35.698154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.596 [2024-07-15 09:39:35.698161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.596 [2024-07-15 09:39:35.701658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.596 [2024-07-15 09:39:35.710766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.596 [2024-07-15 09:39:35.711468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.596 [2024-07-15 09:39:35.711504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.596 [2024-07-15 09:39:35.711515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.596 [2024-07-15 09:39:35.711760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.596 [2024-07-15 09:39:35.711981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.596 [2024-07-15 09:39:35.711991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.596 [2024-07-15 09:39:35.711998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.596 [2024-07-15 09:39:35.715495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.596 [2024-07-15 09:39:35.724568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.596 [2024-07-15 09:39:35.725273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.596 [2024-07-15 09:39:35.725310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.596 [2024-07-15 09:39:35.725321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.596 [2024-07-15 09:39:35.725557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.596 [2024-07-15 09:39:35.725785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.596 [2024-07-15 09:39:35.725794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.596 [2024-07-15 09:39:35.725801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.596 [2024-07-15 09:39:35.729302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.596 [2024-07-15 09:39:35.738377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.596 [2024-07-15 09:39:35.739031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.596 [2024-07-15 09:39:35.739068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.596 [2024-07-15 09:39:35.739079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.596 [2024-07-15 09:39:35.739315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.596 [2024-07-15 09:39:35.739534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.596 [2024-07-15 09:39:35.739542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.596 [2024-07-15 09:39:35.739550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.596 [2024-07-15 09:39:35.743065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.596 [2024-07-15 09:39:35.752137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.596 [2024-07-15 09:39:35.752835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.596 [2024-07-15 09:39:35.752872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.596 [2024-07-15 09:39:35.752884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.596 [2024-07-15 09:39:35.753124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.596 [2024-07-15 09:39:35.753351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.596 [2024-07-15 09:39:35.753360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.596 [2024-07-15 09:39:35.753367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.596 [2024-07-15 09:39:35.756875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.596 [2024-07-15 09:39:35.765946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.596 [2024-07-15 09:39:35.766457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.596 [2024-07-15 09:39:35.766494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.596 [2024-07-15 09:39:35.766505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.596 [2024-07-15 09:39:35.766741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.596 [2024-07-15 09:39:35.766974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.596 [2024-07-15 09:39:35.766983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.596 [2024-07-15 09:39:35.766990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.596 [2024-07-15 09:39:35.770486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.596 [2024-07-15 09:39:35.779766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.596 [2024-07-15 09:39:35.780365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.596 [2024-07-15 09:39:35.780384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.596 [2024-07-15 09:39:35.780391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.596 [2024-07-15 09:39:35.780608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.596 [2024-07-15 09:39:35.780830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.596 [2024-07-15 09:39:35.780838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.596 [2024-07-15 09:39:35.780845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.596 [2024-07-15 09:39:35.784337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.858 [2024-07-15 09:39:35.793613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.858 [2024-07-15 09:39:35.794046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-07-15 09:39:35.794063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.858 [2024-07-15 09:39:35.794070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.858 [2024-07-15 09:39:35.794286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.858 [2024-07-15 09:39:35.794502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.858 [2024-07-15 09:39:35.794511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.858 [2024-07-15 09:39:35.794517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.858 [2024-07-15 09:39:35.798013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.858 [2024-07-15 09:39:35.807489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.858 [2024-07-15 09:39:35.808165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-07-15 09:39:35.808202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.858 [2024-07-15 09:39:35.808213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.858 [2024-07-15 09:39:35.808450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.858 [2024-07-15 09:39:35.808669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.858 [2024-07-15 09:39:35.808678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.858 [2024-07-15 09:39:35.808686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.858 [2024-07-15 09:39:35.812198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.858 [2024-07-15 09:39:35.821274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.858 [2024-07-15 09:39:35.821878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-07-15 09:39:35.821916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.858 [2024-07-15 09:39:35.821928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.858 [2024-07-15 09:39:35.822167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.858 [2024-07-15 09:39:35.822387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.858 [2024-07-15 09:39:35.822395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.858 [2024-07-15 09:39:35.822403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.858 [2024-07-15 09:39:35.825910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.858 [2024-07-15 09:39:35.835186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.858 [2024-07-15 09:39:35.835739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-07-15 09:39:35.835763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.858 [2024-07-15 09:39:35.835771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.858 [2024-07-15 09:39:35.835988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.858 [2024-07-15 09:39:35.836204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.858 [2024-07-15 09:39:35.836212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.858 [2024-07-15 09:39:35.836219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.858 [2024-07-15 09:39:35.839711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.858 [2024-07-15 09:39:35.848998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.858 [2024-07-15 09:39:35.849535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-07-15 09:39:35.849572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.858 [2024-07-15 09:39:35.849584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.858 [2024-07-15 09:39:35.849828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.858 [2024-07-15 09:39:35.850049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.858 [2024-07-15 09:39:35.850058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.858 [2024-07-15 09:39:35.850065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.858 [2024-07-15 09:39:35.853565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.858 [2024-07-15 09:39:35.862853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.858 [2024-07-15 09:39:35.863544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-07-15 09:39:35.863580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.858 [2024-07-15 09:39:35.863597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.858 [2024-07-15 09:39:35.863843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.858 [2024-07-15 09:39:35.864065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.858 [2024-07-15 09:39:35.864074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.858 [2024-07-15 09:39:35.864082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.858 [2024-07-15 09:39:35.867581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.858 [2024-07-15 09:39:35.876657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.858 [2024-07-15 09:39:35.877164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-07-15 09:39:35.877202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.858 [2024-07-15 09:39:35.877213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.858 [2024-07-15 09:39:35.877449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.858 [2024-07-15 09:39:35.877669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.858 [2024-07-15 09:39:35.877678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.859 [2024-07-15 09:39:35.877685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.859 [2024-07-15 09:39:35.881193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.859 [2024-07-15 09:39:35.890472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.859 [2024-07-15 09:39:35.891159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.859 [2024-07-15 09:39:35.891196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.859 [2024-07-15 09:39:35.891207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.859 [2024-07-15 09:39:35.891443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.859 [2024-07-15 09:39:35.891663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.859 [2024-07-15 09:39:35.891671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.859 [2024-07-15 09:39:35.891678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.859 [2024-07-15 09:39:35.895185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.859 [2024-07-15 09:39:35.904257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.859 [2024-07-15 09:39:35.904830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.859 [2024-07-15 09:39:35.904868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.859 [2024-07-15 09:39:35.904879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.859 [2024-07-15 09:39:35.905115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.859 [2024-07-15 09:39:35.905335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.859 [2024-07-15 09:39:35.905347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.859 [2024-07-15 09:39:35.905355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.859 [2024-07-15 09:39:35.908863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.859 [2024-07-15 09:39:35.918142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.859 [2024-07-15 09:39:35.918696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.859 [2024-07-15 09:39:35.918733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.859 [2024-07-15 09:39:35.918744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.859 [2024-07-15 09:39:35.918988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.859 [2024-07-15 09:39:35.919208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.859 [2024-07-15 09:39:35.919217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.859 [2024-07-15 09:39:35.919224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.859 [2024-07-15 09:39:35.922722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.859 09:39:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:48.859 09:39:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:30:48.859 09:39:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:48.859 09:39:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:48.859 09:39:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:48.859 [2024-07-15 09:39:35.932004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.859 [2024-07-15 09:39:35.932549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.859 [2024-07-15 09:39:35.932585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.859 [2024-07-15 09:39:35.932596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.859 [2024-07-15 09:39:35.932840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.859 [2024-07-15 09:39:35.933061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.859 [2024-07-15 09:39:35.933069] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.859 [2024-07-15 09:39:35.933077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.859 [2024-07-15 09:39:35.936577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.859 [2024-07-15 09:39:35.945871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.859 [2024-07-15 09:39:35.946544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.859 [2024-07-15 09:39:35.946581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.859 [2024-07-15 09:39:35.946592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.859 [2024-07-15 09:39:35.946835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.859 [2024-07-15 09:39:35.947056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.859 [2024-07-15 09:39:35.947069] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.859 [2024-07-15 09:39:35.947076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.859 [2024-07-15 09:39:35.950576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.859 [2024-07-15 09:39:35.959648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.859 [2024-07-15 09:39:35.960338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.859 [2024-07-15 09:39:35.960375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.859 [2024-07-15 09:39:35.960387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.859 [2024-07-15 09:39:35.960624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.859 [2024-07-15 09:39:35.960852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.859 [2024-07-15 09:39:35.960861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.859 [2024-07-15 09:39:35.960868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.859 09:39:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:48.859 [2024-07-15 09:39:35.964369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.859 09:39:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:48.859 09:39:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.860 09:39:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:48.860 [2024-07-15 09:39:35.969070] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:48.860 [2024-07-15 09:39:35.973441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.860 [2024-07-15 09:39:35.973989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.860 [2024-07-15 09:39:35.974026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.860 [2024-07-15 09:39:35.974037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.860 09:39:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.860 [2024-07-15 09:39:35.974273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.860 [2024-07-15 09:39:35.974493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.860 [2024-07-15 09:39:35.974501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.860 [2024-07-15 09:39:35.974509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:48.860 09:39:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:48.860 09:39:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.860 09:39:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:48.860 [2024-07-15 09:39:35.978015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.860 [2024-07-15 09:39:35.987292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.860 [2024-07-15 09:39:35.988074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.860 [2024-07-15 09:39:35.988111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.860 [2024-07-15 09:39:35.988126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.860 [2024-07-15 09:39:35.988362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.860 [2024-07-15 09:39:35.988583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.860 [2024-07-15 09:39:35.988591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.860 [2024-07-15 09:39:35.988599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.860 [2024-07-15 09:39:35.992103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.860 [2024-07-15 09:39:36.001173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.860 [2024-07-15 09:39:36.001859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.860 [2024-07-15 09:39:36.001897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.860 [2024-07-15 09:39:36.001907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.860 [2024-07-15 09:39:36.002144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.860 [2024-07-15 09:39:36.002363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.860 [2024-07-15 09:39:36.002371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.860 [2024-07-15 09:39:36.002379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.860 Malloc0 00:30:48.860 09:39:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.860 [2024-07-15 09:39:36.005884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.860 09:39:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:48.860 09:39:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.860 09:39:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:48.860 [2024-07-15 09:39:36.014955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.860 [2024-07-15 09:39:36.015502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.860 [2024-07-15 09:39:36.015539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.860 [2024-07-15 09:39:36.015549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.860 [2024-07-15 09:39:36.015793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.860 [2024-07-15 09:39:36.016014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.860 [2024-07-15 09:39:36.016022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.860 [2024-07-15 09:39:36.016030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.860 09:39:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.860 09:39:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:48.860 09:39:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.860 09:39:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:48.860 [2024-07-15 09:39:36.019525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.860 [2024-07-15 09:39:36.028803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.860 09:39:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.860 [2024-07-15 09:39:36.029494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.860 [2024-07-15 09:39:36.029531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf18540 with addr=10.0.0.2, port=4420 00:30:48.860 [2024-07-15 09:39:36.029542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18540 is same with the state(5) to be set 00:30:48.860 09:39:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:48.860 [2024-07-15 09:39:36.029785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf18540 (9): Bad file descriptor 00:30:48.860 [2024-07-15 09:39:36.030006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.860 [2024-07-15 09:39:36.030015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.860 [2024-07-15 09:39:36.030022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
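Annotation: the rpc_cmd calls traced above (bdevperf.sh@17-21) are the whole target-side setup for this test: create the TCP transport, create a malloc bdev, create subsystem cnode1, attach the namespace, and add the 10.0.0.2:4420 listener. rpc_cmd wraps SPDK's scripts/rpc.py against the running nvmf_tgt, so done by hand the same sequence looks roughly like the sketch below (default /var/tmp/spdk.sock assumed; transport flags copied verbatim from the trace):

  RPC=./scripts/rpc.py                                     # assumes the default RPC socket
  $RPC nvmf_create_transport -t tcp -o -u 8192             # options exactly as in the trace
  $RPC bdev_malloc_create 64 512 -b Malloc0                # 64 MiB malloc bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420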
00:30:48.860 09:39:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.860 09:39:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:48.860 [2024-07-15 09:39:36.033520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.860 [2024-07-15 09:39:36.036553] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:48.860 09:39:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.860 09:39:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 894610 00:30:48.860 [2024-07-15 09:39:36.042607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.120 [2024-07-15 09:39:36.076368] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:57.316 00:30:57.316 Latency(us) 00:30:57.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:57.316 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:57.316 Verification LBA range: start 0x0 length 0x4000 00:30:57.316 Nvme1n1 : 15.01 8385.63 32.76 9823.13 0.00 7003.75 778.24 16274.77 00:30:57.316 =================================================================================================================== 00:30:57.316 Total : 8385.63 32.76 9823.13 0.00 7003.75 778.24 16274.77 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:57.576 rmmod nvme_tcp 00:30:57.576 rmmod nvme_fabrics 00:30:57.576 rmmod nvme_keyring 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 895686 ']' 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 895686 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 895686 ']' 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 895686 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 
-- # '[' Linux = Linux ']' 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 895686 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 895686' 00:30:57.576 killing process with pid 895686 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 895686 00:30:57.576 09:39:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 895686 00:30:57.837 09:39:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:57.837 09:39:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:57.837 09:39:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:57.837 09:39:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:57.837 09:39:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:57.837 09:39:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.837 09:39:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:57.837 09:39:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.749 09:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:59.749 00:30:59.749 real 0m28.680s 00:30:59.749 user 1m2.840s 00:30:59.749 sys 0m7.883s 00:30:59.749 09:39:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:59.749 09:39:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:59.749 ************************************ 00:30:59.749 END TEST nvmf_bdevperf 00:30:59.749 ************************************ 00:31:00.012 09:39:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:00.012 09:39:46 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:00.012 09:39:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:00.012 09:39:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:00.012 09:39:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:00.012 ************************************ 00:31:00.012 START TEST nvmf_target_disconnect 00:31:00.012 ************************************ 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:00.012 * Looking for test storage... 
00:31:00.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:31:00.012 09:39:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:08.171 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:08.172 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:08.172 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.172 09:39:54 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:08.172 Found net devices under 0000:31:00.0: cvl_0_0 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:08.172 Found net devices under 0000:31:00.1: cvl_0_1 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:08.172 09:39:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:08.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:08.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.498 ms 00:31:08.172 00:31:08.172 --- 10.0.0.2 ping statistics --- 00:31:08.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.172 rtt min/avg/max/mdev = 0.498/0.498/0.498/0.000 ms 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:08.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:08.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:31:08.172 00:31:08.172 --- 10.0.0.1 ping statistics --- 00:31:08.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.172 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:08.172 ************************************ 00:31:08.172 START TEST nvmf_target_disconnect_tc1 00:31:08.172 ************************************ 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:31:08.172 
09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:31:08.172 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:08.172 EAL: No free 2048 kB hugepages reported on node 1 00:31:08.434 [2024-07-15 09:39:55.392235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.434 [2024-07-15 09:39:55.392290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc24b0 with addr=10.0.0.2, port=4420 00:31:08.434 [2024-07-15 09:39:55.392322] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:08.434 [2024-07-15 09:39:55.392339] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:08.434 [2024-07-15 09:39:55.392347] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:31:08.434 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:31:08.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:31:08.434 Initializing NVMe Controllers 00:31:08.434 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:31:08.434 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:08.434 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:08.434 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:08.434 00:31:08.434 real 0m0.123s 00:31:08.434 user 0m0.043s 00:31:08.434 sys 0m0.079s 
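Annotation: tc1 runs the reconnect example against 10.0.0.2:4420 before any target exists, wrapped in the framework's NOT helper, so the "Create probe context failed" error above is the pass condition — the command exits with es=1 and the final (( !es == 0 )) check turns that failure into a successful sub-test. A minimal sketch of that expect-failure idiom (a hypothetical standalone helper, not the framework's actual NOT implementation):

  # sketch of the expect-failure idiom used by tc1
  expect_failure() {
      "$@" && return 1      # command unexpectedly succeeded -> test fails
      return 0              # non-zero exit is the desired outcome
  }
  expect_failure ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'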
00:31:08.434 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:08.434 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:08.434 ************************************ 00:31:08.434 END TEST nvmf_target_disconnect_tc1 00:31:08.434 ************************************ 00:31:08.434 09:39:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:31:08.434 09:39:55 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:31:08.434 09:39:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:08.434 09:39:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:08.434 09:39:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:08.434 ************************************ 00:31:08.434 START TEST nvmf_target_disconnect_tc2 00:31:08.434 ************************************ 00:31:08.434 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:31:08.434 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:31:08.434 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:08.434 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:08.434 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:08.434 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:08.434 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=902163 00:31:08.434 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 902163 00:31:08.434 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 902163 ']' 00:31:08.434 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:08.435 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:08.435 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:08.435 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:08.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
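Annotation: tc2 starts nvmf_tgt (pid 902163) inside the cvl_0_0_ns_spdk namespace with -i 0 -e 0xFFFF -m 0xF0 and then blocks in waitforlisten until the RPC socket answers — that is the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above. A hand-rolled equivalent of that wait (a hypothetical helper, not the framework's waitforlisten) could be:

  # poll until the target's RPC socket accepts requests
  wait_for_rpc() {
      local sock=${1:-/var/tmp/spdk.sock}
      for _ in $(seq 1 100); do
          [ -S "$sock" ] && ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
          sleep 0.1
      done
      echo "target never came up on $sock" >&2
      return 1
  }
  wait_for_rpc /var/tmp/spdk.sock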
00:31:08.435 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:08.435 09:39:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:08.435 [2024-07-15 09:39:55.548987] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:31:08.435 [2024-07-15 09:39:55.549050] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:08.435 EAL: No free 2048 kB hugepages reported on node 1 00:31:08.697 [2024-07-15 09:39:55.647314] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:08.697 [2024-07-15 09:39:55.744005] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:08.697 [2024-07-15 09:39:55.744065] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:08.697 [2024-07-15 09:39:55.744074] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:08.697 [2024-07-15 09:39:55.744081] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:08.697 [2024-07-15 09:39:55.744087] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:08.697 [2024-07-15 09:39:55.744687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:31:08.697 [2024-07-15 09:39:55.744836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:31:08.697 [2024-07-15 09:39:55.745356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:31:08.697 [2024-07-15 09:39:55.745361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:09.272 Malloc0 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:09.272 09:39:56 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:09.272 [2024-07-15 09:39:56.421936] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:09.272 [2024-07-15 09:39:56.450233] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=902433 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:31:09.272 09:39:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:09.533 EAL: No free 2048 kB 
hugepages reported on node 1 00:31:11.460 09:39:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 902163 00:31:11.460 09:39:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:31:11.460 Read completed with error (sct=0, sc=8) 00:31:11.460 starting I/O failed 00:31:11.460 Read completed with error (sct=0, sc=8) 00:31:11.460 starting I/O failed 00:31:11.460 Read completed with error (sct=0, sc=8) 00:31:11.460 starting I/O failed 00:31:11.460 Read completed with error (sct=0, sc=8) 00:31:11.460 starting I/O failed 00:31:11.460 Read completed with error (sct=0, sc=8) 00:31:11.460 starting I/O failed 00:31:11.460 Read completed with error (sct=0, sc=8) 00:31:11.460 starting I/O failed 00:31:11.460 Read completed with error (sct=0, sc=8) 00:31:11.460 starting I/O failed 00:31:11.460 Read completed with error (sct=0, sc=8) 00:31:11.460 starting I/O failed 00:31:11.460 Read completed with error (sct=0, sc=8) 00:31:11.460 starting I/O failed 00:31:11.460 Read completed with error (sct=0, sc=8) 00:31:11.460 starting I/O failed 00:31:11.460 Write completed with error (sct=0, sc=8) 00:31:11.460 starting I/O failed 00:31:11.460 Read completed with error (sct=0, sc=8) 00:31:11.461 starting I/O failed 00:31:11.461 Read completed with error (sct=0, sc=8) 00:31:11.461 starting I/O failed 00:31:11.461 Read completed with error (sct=0, sc=8) 00:31:11.461 starting I/O failed 00:31:11.461 Read completed with error (sct=0, sc=8) 00:31:11.461 starting I/O failed 00:31:11.461 Read completed with error (sct=0, sc=8) 00:31:11.461 starting I/O failed 00:31:11.461 Read completed with error (sct=0, sc=8) 00:31:11.461 starting I/O failed 00:31:11.461 Write completed with error (sct=0, sc=8) 00:31:11.461 starting I/O failed 00:31:11.461 Read completed with error (sct=0, sc=8) 00:31:11.461 starting I/O failed 00:31:11.461 Read completed with error (sct=0, sc=8) 00:31:11.461 starting I/O failed 00:31:11.461 Read completed with error (sct=0, sc=8) 00:31:11.461 starting I/O failed 00:31:11.461 Read completed with error (sct=0, sc=8) 00:31:11.461 starting I/O failed 00:31:11.461 Read completed with error (sct=0, sc=8) 00:31:11.461 starting I/O failed 00:31:11.461 Write completed with error (sct=0, sc=8) 00:31:11.461 starting I/O failed 00:31:11.461 Read completed with error (sct=0, sc=8) 00:31:11.461 starting I/O failed 00:31:11.461 Read completed with error (sct=0, sc=8) 00:31:11.461 starting I/O failed 00:31:11.461 Write completed with error (sct=0, sc=8) 00:31:11.461 starting I/O failed 00:31:11.461 Read completed with error (sct=0, sc=8) 00:31:11.461 starting I/O failed 00:31:11.461 Read completed with error (sct=0, sc=8) 00:31:11.461 starting I/O failed 00:31:11.461 Read completed with error (sct=0, sc=8) 00:31:11.461 starting I/O failed 00:31:11.461 Read completed with error (sct=0, sc=8) 00:31:11.461 starting I/O failed 00:31:11.461 Write completed with error (sct=0, sc=8) 00:31:11.461 starting I/O failed 00:31:11.461 [2024-07-15 09:39:58.478142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:11.461 [2024-07-15 09:39:58.478401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.461 [2024-07-15 09:39:58.478423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.461 qpair failed and we were unable to 
recover it. 00:31:11.461 [2024-07-15 09:39:58.478489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.461 [2024-07-15 09:39:58.478503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.461 qpair failed and we were unable to recover it. 00:31:11.461 [2024-07-15 09:39:58.478723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.461 [2024-07-15 09:39:58.478733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.461 qpair failed and we were unable to recover it. 00:31:11.461 [2024-07-15 09:39:58.479038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.461 [2024-07-15 09:39:58.479072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.461 qpair failed and we were unable to recover it. 00:31:11.461 [2024-07-15 09:39:58.479449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.461 [2024-07-15 09:39:58.479462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.461 qpair failed and we were unable to recover it. 00:31:11.461 [2024-07-15 09:39:58.479760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.461 [2024-07-15 09:39:58.479771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.461 qpair failed and we were unable to recover it. 00:31:11.461 [2024-07-15 09:39:58.480155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.461 [2024-07-15 09:39:58.480164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.461 qpair failed and we were unable to recover it. 00:31:11.461 [2024-07-15 09:39:58.480430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.461 [2024-07-15 09:39:58.480439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.461 qpair failed and we were unable to recover it. 00:31:11.461 [2024-07-15 09:39:58.480540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.461 [2024-07-15 09:39:58.480548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.461 qpair failed and we were unable to recover it. 00:31:11.461 [2024-07-15 09:39:58.480790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.461 [2024-07-15 09:39:58.480801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.461 qpair failed and we were unable to recover it. 00:31:11.461 [2024-07-15 09:39:58.481169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.461 [2024-07-15 09:39:58.481179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.461 qpair failed and we were unable to recover it. 
00:31:11.466 [2024-07-15 09:39:58.546268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.466 [2024-07-15 09:39:58.546295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.466 qpair failed and we were unable to recover it. 00:31:11.466 [2024-07-15 09:39:58.546646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.466 [2024-07-15 09:39:58.546673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.466 qpair failed and we were unable to recover it. 00:31:11.466 [2024-07-15 09:39:58.547042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.466 [2024-07-15 09:39:58.547071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.466 qpair failed and we were unable to recover it. 00:31:11.466 [2024-07-15 09:39:58.547327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.466 [2024-07-15 09:39:58.547354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.466 qpair failed and we were unable to recover it. 00:31:11.466 [2024-07-15 09:39:58.547721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.466 [2024-07-15 09:39:58.547748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.466 qpair failed and we were unable to recover it. 00:31:11.466 [2024-07-15 09:39:58.548158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.466 [2024-07-15 09:39:58.548186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.466 qpair failed and we were unable to recover it. 00:31:11.466 [2024-07-15 09:39:58.548448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.466 [2024-07-15 09:39:58.548474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.466 qpair failed and we were unable to recover it. 00:31:11.466 [2024-07-15 09:39:58.548838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.466 [2024-07-15 09:39:58.548865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.466 qpair failed and we were unable to recover it. 00:31:11.466 [2024-07-15 09:39:58.549217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.466 [2024-07-15 09:39:58.549243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.466 qpair failed and we were unable to recover it. 00:31:11.466 [2024-07-15 09:39:58.549569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.466 [2024-07-15 09:39:58.549594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.466 qpair failed and we were unable to recover it. 
00:31:11.466 [2024-07-15 09:39:58.550024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.466 [2024-07-15 09:39:58.550051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.466 qpair failed and we were unable to recover it. 00:31:11.466 [2024-07-15 09:39:58.550394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.466 [2024-07-15 09:39:58.550420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.466 qpair failed and we were unable to recover it. 00:31:11.466 [2024-07-15 09:39:58.550788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.466 [2024-07-15 09:39:58.550817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.466 qpair failed and we were unable to recover it. 00:31:11.466 [2024-07-15 09:39:58.551193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.466 [2024-07-15 09:39:58.551219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.466 qpair failed and we were unable to recover it. 00:31:11.466 [2024-07-15 09:39:58.551556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.466 [2024-07-15 09:39:58.551582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.466 qpair failed and we were unable to recover it. 00:31:11.466 [2024-07-15 09:39:58.551947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.466 [2024-07-15 09:39:58.551973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.466 qpair failed and we were unable to recover it. 00:31:11.466 [2024-07-15 09:39:58.552300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.466 [2024-07-15 09:39:58.552333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.552690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.552717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.553147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.553176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.553529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.553556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 
00:31:11.467 [2024-07-15 09:39:58.553918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.553947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.554289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.554315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.554651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.554677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.554905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.554936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.555310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.555337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.555684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.555711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.556085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.556113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.556482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.556509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.556866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.556894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.557140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.557166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 
00:31:11.467 [2024-07-15 09:39:58.557547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.557574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.557798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.557825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.558201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.558227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.558566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.558592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.558924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.558952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.559205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.559234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.559609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.559636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.559973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.560001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.560377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.560405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.560768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.560796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 
00:31:11.467 [2024-07-15 09:39:58.561101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.561128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.561488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.561515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.561878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.561906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.562264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.562291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.562537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.562564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.562929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.562958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.563303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.563330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.563582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.563609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.563986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.564013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.564379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.564405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 
00:31:11.467 [2024-07-15 09:39:58.564774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.564801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.565160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.565186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.565538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.565564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.565931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.565959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.566342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.566368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.566721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.566747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.567124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.467 [2024-07-15 09:39:58.567157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.467 qpair failed and we were unable to recover it. 00:31:11.467 [2024-07-15 09:39:58.567510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.567536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.567893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.567921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.568278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.568306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 
00:31:11.468 [2024-07-15 09:39:58.568607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.568634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.568872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.568901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.569248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.569275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.569601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.569628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.569987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.570016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.570381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.570409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.570772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.570800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.571161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.571188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.571413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.571440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.571683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.571710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 
00:31:11.468 [2024-07-15 09:39:58.572088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.572117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.572362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.572391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.572772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.572802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.573054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.573081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.573431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.573457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.573844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.573873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.574214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.574242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.574611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.574638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.574960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.574988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.575333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.575358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 
00:31:11.468 [2024-07-15 09:39:58.575698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.575725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.576070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.576098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.576473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.576500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.576873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.576901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.577264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.577290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.577536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.577563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.577936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.577965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.578274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.578301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.578552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.578579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.578904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.578932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 
00:31:11.468 [2024-07-15 09:39:58.579276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.579303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.579659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.579685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.580068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-07-15 09:39:58.580096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-07-15 09:39:58.580449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.580476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.580829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.580857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.581215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.581241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.581506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.581538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.581902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.581931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.582311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.582337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.582563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.582591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 
00:31:11.469 [2024-07-15 09:39:58.582937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.582965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.583313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.583339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.583659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.583685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.584019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.584047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.584405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.584432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.584745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.584781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.585118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.585144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.585494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.585520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.585880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.585908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.586244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.586270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 
00:31:11.469 [2024-07-15 09:39:58.586611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.586638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.587027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.587054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.587379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.587404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.587779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.587806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.588138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.588165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.588372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.588399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.588770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.588797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.589028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.589054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.589406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.589433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.589786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.589814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 
00:31:11.469 [2024-07-15 09:39:58.590093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.590120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.590361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.590390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.590771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.590799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.591145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.591172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.591526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.591552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.591911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.591939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-07-15 09:39:58.592282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-07-15 09:39:58.592307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.592663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.592690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.593072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.593100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.593460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.593486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 
00:31:11.470 [2024-07-15 09:39:58.593834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.593863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.594256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.594282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.594519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.594544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.594834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.594862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.595195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.595221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.595591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.595618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.595971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.596004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.596361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.596388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.596760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.596788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.597117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.597144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 
00:31:11.470 [2024-07-15 09:39:58.597520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.597546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.597959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.597988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.598320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.598347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.598711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.598737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.599063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.599090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.599460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.599486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.599724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.599759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.600097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.600123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.600457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.600483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.600814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.600842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 
00:31:11.470 [2024-07-15 09:39:58.601207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.601234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.601592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.601619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.601980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.602008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.602346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.602373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.602599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.602627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.602981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.603009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.603371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.603397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.603761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.603789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.604138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.604165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.604472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.604498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 
00:31:11.470 [2024-07-15 09:39:58.604869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.604897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.605136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.605167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.605501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.605528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.605910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.605939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.606314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.606340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.606687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.606713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-07-15 09:39:58.606973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-07-15 09:39:58.607001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.607450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.607476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.607695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.607722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.607989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.608017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 
00:31:11.471 [2024-07-15 09:39:58.608390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.608417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.608734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.608770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.609019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.609045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.609396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.609424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.609782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.609811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.610150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.610177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.610533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.610566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.610936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.610965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.611332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.611365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.611726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.611760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 
00:31:11.471 [2024-07-15 09:39:58.612072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.612099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.612447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.612473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.612820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.612846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.613234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.613260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.613618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.613644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.613983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.614011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.614349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.614376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.614708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.614735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.615124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.615153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.615477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.615504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 
00:31:11.471 [2024-07-15 09:39:58.615690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.615719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.616095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.616123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.616462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.616489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.616826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.616854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.617198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.617225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.617590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.617617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.617966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.617994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.618364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.618391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.618794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.618822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.619174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.619200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 
00:31:11.471 [2024-07-15 09:39:58.619567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.619593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.619974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.620001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.620365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.620391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.620763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.620791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.621145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.621173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.621510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.621537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.621875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-07-15 09:39:58.621904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-07-15 09:39:58.622262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-07-15 09:39:58.622289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 00:31:11.472 [2024-07-15 09:39:58.622627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-07-15 09:39:58.622654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 00:31:11.472 [2024-07-15 09:39:58.622997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-07-15 09:39:58.623024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 
00:31:11.472 [2024-07-15 09:39:58.623393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-07-15 09:39:58.623420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 00:31:11.472 [2024-07-15 09:39:58.623789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-07-15 09:39:58.623817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 00:31:11.472 [2024-07-15 09:39:58.624162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-07-15 09:39:58.624187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 00:31:11.472 [2024-07-15 09:39:58.624327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-07-15 09:39:58.624355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 00:31:11.472 [2024-07-15 09:39:58.624689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-07-15 09:39:58.624717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 00:31:11.472 [2024-07-15 09:39:58.625028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-07-15 09:39:58.625056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 00:31:11.472 [2024-07-15 09:39:58.625413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-07-15 09:39:58.625446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 00:31:11.472 [2024-07-15 09:39:58.625801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-07-15 09:39:58.625830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 00:31:11.472 [2024-07-15 09:39:58.626217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-07-15 09:39:58.626244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 00:31:11.472 [2024-07-15 09:39:58.626491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-07-15 09:39:58.626518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 
00:31:11.472 [2024-07-15 09:39:58.626773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-07-15 09:39:58.626800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 00:31:11.472 [2024-07-15 09:39:58.627178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-07-15 09:39:58.627204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 00:31:11.472 [2024-07-15 09:39:58.627546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-07-15 09:39:58.627573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 00:31:11.472 [2024-07-15 09:39:58.627810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-07-15 09:39:58.627837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 00:31:11.472 [2024-07-15 09:39:58.628208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-07-15 09:39:58.628235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 00:31:11.472 [2024-07-15 09:39:58.628582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-07-15 09:39:58.628608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 00:31:11.472 [2024-07-15 09:39:58.628942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-07-15 09:39:58.628969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 00:31:11.472 [2024-07-15 09:39:58.629365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-07-15 09:39:58.629391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 00:31:11.472 [2024-07-15 09:39:58.629731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-07-15 09:39:58.629768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 00:31:11.472 [2024-07-15 09:39:58.630116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-07-15 09:39:58.630149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 
00:31:11.472 [2024-07-15 09:39:58.630513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-07-15 09:39:58.630541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b50000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Write completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Write completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Write completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Write completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Write completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Write completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Write completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Write completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Write completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 [2024-07-15 09:39:58.630830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read 
completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Write completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.472 starting I/O failed 00:31:11.472 Read completed with error (sct=0, sc=8) 00:31:11.473 starting I/O failed 00:31:11.473 Write completed with error (sct=0, sc=8) 00:31:11.473 starting I/O failed 00:31:11.473 Read completed with error (sct=0, sc=8) 00:31:11.473 starting I/O failed 00:31:11.473 Read completed with error (sct=0, sc=8) 00:31:11.473 starting I/O failed 00:31:11.473 Write completed with error (sct=0, sc=8) 00:31:11.473 starting I/O failed 00:31:11.473 Read completed with error (sct=0, sc=8) 00:31:11.473 starting I/O failed 00:31:11.473 Write completed with error (sct=0, sc=8) 00:31:11.473 starting I/O failed 00:31:11.473 Read completed with error (sct=0, sc=8) 00:31:11.473 starting I/O failed 00:31:11.473 Read completed with error (sct=0, sc=8) 00:31:11.473 starting I/O failed 00:31:11.473 Read completed with error (sct=0, sc=8) 00:31:11.473 starting I/O failed 00:31:11.473 Write completed with error (sct=0, sc=8) 00:31:11.473 starting I/O failed 00:31:11.473 Read completed with error (sct=0, sc=8) 00:31:11.473 starting I/O failed 00:31:11.473 Read completed with error (sct=0, sc=8) 00:31:11.473 starting I/O failed 00:31:11.473 Read completed with error (sct=0, sc=8) 00:31:11.473 starting I/O failed 00:31:11.473 [2024-07-15 09:39:58.631547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:11.473 [2024-07-15 09:39:58.631998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.632044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.632329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.632358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 
00:31:11.473 [2024-07-15 09:39:58.632730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.632768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.633245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.633332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.633778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.633816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.634311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.634397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.635035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.635123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.635553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.635588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.636062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.636149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.636592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.636625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.637044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.637074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.637452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.637479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 
00:31:11.473 [2024-07-15 09:39:58.637884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.637914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.638206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.638239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.638502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.638528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.638880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.638908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.639221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.639246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.639598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.639624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.639860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.639889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.640257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.640285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.640535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.640561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.640940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.640967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 
00:31:11.473 [2024-07-15 09:39:58.641324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.641351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.641602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.641633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.641989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.642018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.642406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.642433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.642953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.642992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.643350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.643361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.643575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.643585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.644114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.644152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.644504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.644516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.645016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.645055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 
00:31:11.473 [2024-07-15 09:39:58.645424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.645435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.645746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-07-15 09:39:58.645763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-07-15 09:39:58.646078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.474 [2024-07-15 09:39:58.646089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.474 qpair failed and we were unable to recover it. 00:31:11.474 [2024-07-15 09:39:58.646405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.474 [2024-07-15 09:39:58.646414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.474 qpair failed and we were unable to recover it. 00:31:11.474 [2024-07-15 09:39:58.646728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.474 [2024-07-15 09:39:58.646738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.474 qpair failed and we were unable to recover it. 00:31:11.474 [2024-07-15 09:39:58.647088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.474 [2024-07-15 09:39:58.647098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.474 qpair failed and we were unable to recover it. 00:31:11.474 [2024-07-15 09:39:58.647326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.474 [2024-07-15 09:39:58.647335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.474 qpair failed and we were unable to recover it. 00:31:11.474 [2024-07-15 09:39:58.647659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.474 [2024-07-15 09:39:58.647669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.474 qpair failed and we were unable to recover it. 00:31:11.474 [2024-07-15 09:39:58.647998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.474 [2024-07-15 09:39:58.648009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.474 qpair failed and we were unable to recover it. 00:31:11.474 [2024-07-15 09:39:58.648351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.474 [2024-07-15 09:39:58.648361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.474 qpair failed and we were unable to recover it. 
00:31:11.474 [2024-07-15 09:39:58.648660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.756 [2024-07-15 09:39:58.648671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.756 qpair failed and we were unable to recover it. 00:31:11.756 [2024-07-15 09:39:58.649001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.756 [2024-07-15 09:39:58.649013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.756 qpair failed and we were unable to recover it. 00:31:11.756 [2024-07-15 09:39:58.649300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.756 [2024-07-15 09:39:58.649309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.756 qpair failed and we were unable to recover it. 00:31:11.756 [2024-07-15 09:39:58.649647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.756 [2024-07-15 09:39:58.649656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.756 qpair failed and we were unable to recover it. 00:31:11.756 [2024-07-15 09:39:58.650025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.756 [2024-07-15 09:39:58.650035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.756 qpair failed and we were unable to recover it. 00:31:11.756 [2024-07-15 09:39:58.650365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.756 [2024-07-15 09:39:58.650375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.756 qpair failed and we were unable to recover it. 00:31:11.756 [2024-07-15 09:39:58.650740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.756 [2024-07-15 09:39:58.650749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.756 qpair failed and we were unable to recover it. 00:31:11.756 [2024-07-15 09:39:58.651055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.756 [2024-07-15 09:39:58.651065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.756 qpair failed and we were unable to recover it. 00:31:11.756 [2024-07-15 09:39:58.651327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.756 [2024-07-15 09:39:58.651336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.756 qpair failed and we were unable to recover it. 00:31:11.756 [2024-07-15 09:39:58.651639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.756 [2024-07-15 09:39:58.651648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.756 qpair failed and we were unable to recover it. 
00:31:11.756 [2024-07-15 09:39:58.651994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.756 [2024-07-15 09:39:58.652009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.756 qpair failed and we were unable to recover it. 00:31:11.756 [2024-07-15 09:39:58.652314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.756 [2024-07-15 09:39:58.652324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.756 qpair failed and we were unable to recover it. 00:31:11.756 [2024-07-15 09:39:58.652679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.756 [2024-07-15 09:39:58.652689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.756 qpair failed and we were unable to recover it. 00:31:11.756 [2024-07-15 09:39:58.653015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.756 [2024-07-15 09:39:58.653025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.756 qpair failed and we were unable to recover it. 00:31:11.756 [2024-07-15 09:39:58.653377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.756 [2024-07-15 09:39:58.653386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.756 qpair failed and we were unable to recover it. 00:31:11.756 [2024-07-15 09:39:58.653681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.756 [2024-07-15 09:39:58.653691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.756 qpair failed and we were unable to recover it. 00:31:11.756 [2024-07-15 09:39:58.653886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.756 [2024-07-15 09:39:58.653896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.756 qpair failed and we were unable to recover it. 00:31:11.756 [2024-07-15 09:39:58.654225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.756 [2024-07-15 09:39:58.654234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.756 qpair failed and we were unable to recover it. 00:31:11.756 [2024-07-15 09:39:58.654583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.756 [2024-07-15 09:39:58.654592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.756 qpair failed and we were unable to recover it. 00:31:11.756 [2024-07-15 09:39:58.654805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.756 [2024-07-15 09:39:58.654817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 
00:31:11.757 [2024-07-15 09:39:58.655164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.655173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.655470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.655480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.655760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.655770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.656088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.656097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.656470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.656479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.656817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.656827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.657156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.657165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.657513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.657522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.657779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.657788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.658137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.658146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 
00:31:11.757 [2024-07-15 09:39:58.658442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.658452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.658642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.658652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.658988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.658997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.659391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.659400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.659621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.659631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.659802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.659811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.660106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.660116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.660441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.660453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.660680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.660690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.661002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.661012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 
00:31:11.757 [2024-07-15 09:39:58.661354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.661364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.661608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.661618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.661839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.661850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.662174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.662183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.662507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.662517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.662738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.662747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.662961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.662972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.663250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.663260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.663453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.663463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.663792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.663802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 
00:31:11.757 [2024-07-15 09:39:58.664125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.664134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.664431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.664441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.664760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.664769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.665058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.665068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.665407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.665416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.665715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.665724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.666037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.666047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.666358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.666368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.666678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.666688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-07-15 09:39:58.666990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.667001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 
00:31:11.757 [2024-07-15 09:39:58.667294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-07-15 09:39:58.667303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.667596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.667606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.667925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.667935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.668231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.668247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.668618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.668629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.668807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.668817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.669186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.669195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.669503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.669513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.669835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.669845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.670195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.670204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 
00:31:11.758 [2024-07-15 09:39:58.670528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.670538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.670865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.670874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.671209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.671218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.671551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.671560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.671898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.671908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.672248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.672258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.672591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.672600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.672943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.672952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.673281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.673290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.673630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.673639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 
00:31:11.758 [2024-07-15 09:39:58.673983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.673993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.674348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.674357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.674677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.674686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.674960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.674969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.675309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.675318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.675650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.675659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.675999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.676009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.676356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.676365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.676718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.676727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.677052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.677062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 
00:31:11.758 [2024-07-15 09:39:58.677400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.677410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.677771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.677781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.678102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.678111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.678471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.678480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.678828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.678838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.679157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.679166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.679504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.679513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.679858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.679868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.680267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.680276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.680619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.680628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 
00:31:11.758 [2024-07-15 09:39:58.680978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.680988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-07-15 09:39:58.681386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-07-15 09:39:58.681395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.681620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.681629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.681907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.681917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.682247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.682256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.682597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.682608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.682920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.682930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.683245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.683254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.683516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.683525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.683747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.683761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 
00:31:11.759 [2024-07-15 09:39:58.684042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.684051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.684340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.684349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.684669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.684679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.684993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.685003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.685340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.685350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.685573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.685582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.685911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.685921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.686220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.686230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.686549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.686558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.686875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.686886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 
00:31:11.759 [2024-07-15 09:39:58.687200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.687209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.687519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.687528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.687836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.687846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.688184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.688193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.688526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.688535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.688874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.688884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.689221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.689231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.689597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.689606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.689903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.689913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.690229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.690238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 
00:31:11.759 [2024-07-15 09:39:58.690549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.690559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.690885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.690895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.691204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.691215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.691531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.691540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.691851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.691861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.692211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.692220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.692532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.692543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.692867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.692877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-07-15 09:39:58.693192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-07-15 09:39:58.693201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.693524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.693540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 
00:31:11.760 [2024-07-15 09:39:58.693872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.693882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.694275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.694284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.694613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.694622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.694977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.694987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.695324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.695334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.695530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.695539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.695829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.695839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.696236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.696245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.696523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.696532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.696847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.696858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 
00:31:11.760 [2024-07-15 09:39:58.697187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.697198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.697520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.697529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.697884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.697895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.698205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.698214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.698533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.698543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.698882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.698891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.699231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.699240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.699560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.699569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.699882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.699892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.700276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.700288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 
00:31:11.760 [2024-07-15 09:39:58.700581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.700591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.700914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.700924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.701262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.701271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.701596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.701605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.701922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.701932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.702257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.702266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.702517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.702526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.702743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.702758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.703086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.703096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.703414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.703424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 
00:31:11.760 [2024-07-15 09:39:58.703800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.703810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.704104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-07-15 09:39:58.704114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-07-15 09:39:58.704435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-07-15 09:39:58.704445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-07-15 09:39:58.704795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-07-15 09:39:58.704805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-07-15 09:39:58.705125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-07-15 09:39:58.705134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-07-15 09:39:58.705397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-07-15 09:39:58.705406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-07-15 09:39:58.705796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-07-15 09:39:58.705805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-07-15 09:39:58.706196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-07-15 09:39:58.706206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-07-15 09:39:58.706420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-07-15 09:39:58.706430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-07-15 09:39:58.706607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-07-15 09:39:58.706617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 
00:31:11.761 [2024-07-15 09:39:58.706943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-07-15 09:39:58.706952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-07-15 09:39:58.707126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-07-15 09:39:58.707136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-07-15 09:39:58.707523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-07-15 09:39:58.707532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-07-15 09:39:58.707870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-07-15 09:39:58.707886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-07-15 09:39:58.708184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-07-15 09:39:58.708193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-07-15 09:39:58.708503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-07-15 09:39:58.708513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-07-15 09:39:58.708855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-07-15 09:39:58.708864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-07-15 09:39:58.709190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-07-15 09:39:58.709200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-07-15 09:39:58.709526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-07-15 09:39:58.709535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-07-15 09:39:58.709832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-07-15 09:39:58.709842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 
[... the same three-line sequence (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for further connection attempts timestamped 2024-07-15 09:39:58.710175 through 09:39:58.773779 ...]
00:31:11.766 [2024-07-15 09:39:58.774093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.766 [2024-07-15 09:39:58.774103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.766 qpair failed and we were unable to recover it. 00:31:11.766 [2024-07-15 09:39:58.774430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.766 [2024-07-15 09:39:58.774442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.766 qpair failed and we were unable to recover it. 00:31:11.766 [2024-07-15 09:39:58.774818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.766 [2024-07-15 09:39:58.774829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.766 qpair failed and we were unable to recover it. 00:31:11.766 [2024-07-15 09:39:58.775228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.766 [2024-07-15 09:39:58.775237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.766 qpair failed and we were unable to recover it. 00:31:11.766 [2024-07-15 09:39:58.775585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.766 [2024-07-15 09:39:58.775594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.766 qpair failed and we were unable to recover it. 00:31:11.766 [2024-07-15 09:39:58.775835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.766 [2024-07-15 09:39:58.775844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.766 qpair failed and we were unable to recover it. 00:31:11.766 [2024-07-15 09:39:58.776179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.766 [2024-07-15 09:39:58.776188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.766 qpair failed and we were unable to recover it. 00:31:11.766 [2024-07-15 09:39:58.776519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.766 [2024-07-15 09:39:58.776528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.766 qpair failed and we were unable to recover it. 00:31:11.766 [2024-07-15 09:39:58.776840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.766 [2024-07-15 09:39:58.776849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.766 qpair failed and we were unable to recover it. 00:31:11.766 [2024-07-15 09:39:58.777095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.766 [2024-07-15 09:39:58.777104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.766 qpair failed and we were unable to recover it. 
00:31:11.766 [2024-07-15 09:39:58.777453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.766 [2024-07-15 09:39:58.777462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.766 qpair failed and we were unable to recover it. 00:31:11.766 [2024-07-15 09:39:58.777779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.766 [2024-07-15 09:39:58.777789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.766 qpair failed and we were unable to recover it. 00:31:11.766 [2024-07-15 09:39:58.778111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.766 [2024-07-15 09:39:58.778120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.766 qpair failed and we were unable to recover it. 00:31:11.766 [2024-07-15 09:39:58.778433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.766 [2024-07-15 09:39:58.778442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.766 qpair failed and we were unable to recover it. 00:31:11.766 [2024-07-15 09:39:58.778669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.766 [2024-07-15 09:39:58.778678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.766 qpair failed and we were unable to recover it. 00:31:11.766 [2024-07-15 09:39:58.779036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.766 [2024-07-15 09:39:58.779046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.766 qpair failed and we were unable to recover it. 00:31:11.766 [2024-07-15 09:39:58.779394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.766 [2024-07-15 09:39:58.779403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.766 qpair failed and we were unable to recover it. 00:31:11.766 [2024-07-15 09:39:58.779698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.766 [2024-07-15 09:39:58.779707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.766 qpair failed and we were unable to recover it. 00:31:11.766 [2024-07-15 09:39:58.780016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.766 [2024-07-15 09:39:58.780026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.766 qpair failed and we were unable to recover it. 00:31:11.766 [2024-07-15 09:39:58.780351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.766 [2024-07-15 09:39:58.780360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.766 qpair failed and we were unable to recover it. 
00:31:11.766 [2024-07-15 09:39:58.780676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.766 [2024-07-15 09:39:58.780685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.766 qpair failed and we were unable to recover it. 00:31:11.766 [2024-07-15 09:39:58.781042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.766 [2024-07-15 09:39:58.781052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.766 qpair failed and we were unable to recover it. 00:31:11.766 [2024-07-15 09:39:58.781365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.766 [2024-07-15 09:39:58.781374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.781689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.781698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.781974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.781984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.782320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.782329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.782637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.782646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.783059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.783068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.783386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.783395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.783716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.783725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 
00:31:11.767 [2024-07-15 09:39:58.783913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.783924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.784245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.784254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.784547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.784557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.784873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.784882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.785069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.785078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.785407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.785416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.785727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.785736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.786069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.786079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.786416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.786425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.786758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.786767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 
00:31:11.767 [2024-07-15 09:39:58.787101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.787110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.787420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.787430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.787720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.787728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.788050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.788060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.788419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.788428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.788802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.788812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.789112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.789121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.789337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.789345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.789656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.789665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.789869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.789878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 
00:31:11.767 [2024-07-15 09:39:58.790219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.790228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.790552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.790561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.790939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.790948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.791294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.791303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.791642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.791653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.791958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.791969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.792303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.792312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.792506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.792516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.792836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.792846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.793166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.793176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 
00:31:11.767 [2024-07-15 09:39:58.793516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.793525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.793870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.793880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.794166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.794176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-07-15 09:39:58.794489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-07-15 09:39:58.794498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.794799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.794808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.795126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.795135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.795473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.795482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.795806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.795816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.796137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.796146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.796464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.796473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 
00:31:11.768 [2024-07-15 09:39:58.796808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.796818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.797046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.797055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.797216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.797225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.797438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.797447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.797758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.797767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.798083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.798092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.798403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.798412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.798712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.798722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.799061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.799070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.799297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.799306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 
00:31:11.768 [2024-07-15 09:39:58.799610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.799619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.799915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.799927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.800228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.800237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.800460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.800468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.800784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.800793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.801185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.801194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.801443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.801451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.801839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.801848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.802237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.802246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.802552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.802562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 
00:31:11.768 [2024-07-15 09:39:58.802917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.802926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.803253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.803263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.803588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.803596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.803902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.803912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.804222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.804231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.804435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.804445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.804811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.804821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.805191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.805200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.805494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.805503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.805830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.805839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 
00:31:11.768 [2024-07-15 09:39:58.806155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.806164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.806490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.806499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.806838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.806848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.807185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.807194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.807508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-07-15 09:39:58.807517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-07-15 09:39:58.807832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-07-15 09:39:58.807842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-07-15 09:39:58.808177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-07-15 09:39:58.808185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-07-15 09:39:58.808514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-07-15 09:39:58.808523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-07-15 09:39:58.808863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-07-15 09:39:58.808873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-07-15 09:39:58.809187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-07-15 09:39:58.809197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 
00:31:11.769 [2024-07-15 09:39:58.809505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-07-15 09:39:58.809514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-07-15 09:39:58.809824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-07-15 09:39:58.809834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-07-15 09:39:58.810146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-07-15 09:39:58.810155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-07-15 09:39:58.810449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-07-15 09:39:58.810458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-07-15 09:39:58.810800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-07-15 09:39:58.810809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-07-15 09:39:58.811214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-07-15 09:39:58.811223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-07-15 09:39:58.811545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-07-15 09:39:58.811554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-07-15 09:39:58.811841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-07-15 09:39:58.811850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-07-15 09:39:58.812026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-07-15 09:39:58.812036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-07-15 09:39:58.812373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-07-15 09:39:58.812382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 
00:31:11.769 [2024-07-15 09:39:58.812593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-07-15 09:39:58.812602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-07-15 09:39:58.812817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-07-15 09:39:58.812827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-07-15 09:39:58.813143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-07-15 09:39:58.813153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-07-15 09:39:58.813440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-07-15 09:39:58.813450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-07-15 09:39:58.813768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-07-15 09:39:58.813778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-07-15 09:39:58.813992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-07-15 09:39:58.814001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-07-15 09:39:58.814317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-07-15 09:39:58.814326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-07-15 09:39:58.814517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-07-15 09:39:58.814527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-07-15 09:39:58.814844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-07-15 09:39:58.814854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-07-15 09:39:58.815097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-07-15 09:39:58.815106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 
00:31:11.769 [2024-07-15 09:39:58.815460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:31:11.769 [2024-07-15 09:39:58.815469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 
00:31:11.769 qpair failed and we were unable to recover it. 
[The same three-line failure repeats for every reconnect attempt logged between 09:39:58.815 and 09:39:58.883: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it."] 
00:31:11.775 [2024-07-15 09:39:58.883499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:31:11.775 [2024-07-15 09:39:58.883508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 
00:31:11.775 qpair failed and we were unable to recover it. 
00:31:11.775 [2024-07-15 09:39:58.883858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.883868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.884206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.884215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.884559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.884569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.884890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.884899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.885170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.885179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.885508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.885517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.885856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.885866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.886187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.886196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.886519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.886529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.886849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.886859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 
00:31:11.775 [2024-07-15 09:39:58.887151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.887161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.887479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.887487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.887798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.887815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.888137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.888147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.888483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.888493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.888821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.888830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.889124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.889134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.889454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.889463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.889854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.889865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.890172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.890182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 
00:31:11.775 [2024-07-15 09:39:58.890415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.890424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.890746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.890760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.891147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.891157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.891452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.891462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.891784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.891794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.892111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.892122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.892466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.892477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.892776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.892793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.893130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.893139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 00:31:11.775 [2024-07-15 09:39:58.893504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.775 [2024-07-15 09:39:58.893513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.775 qpair failed and we were unable to recover it. 
00:31:11.775 [2024-07-15 09:39:58.893811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.893821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.894176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.894185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.894493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.894502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.894818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.894827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.895149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.895159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.895480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.895489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.895672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.895682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.895887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.895897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.896091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.896101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.896424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.896433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 
00:31:11.776 [2024-07-15 09:39:58.896835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.896845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.897158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.897168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.897491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.897500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.897813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.897823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.898028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.898037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.898330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.898339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.898639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.898648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.898850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.898860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.899250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.899259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.899555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.899564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 
00:31:11.776 [2024-07-15 09:39:58.899905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.899914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.900232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.900242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.900563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.900572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.900856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.900867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.901196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.901205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.901528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.901538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.901939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.901949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.902349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.902359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.902672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.902681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.903006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.903016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 
00:31:11.776 [2024-07-15 09:39:58.903326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.903336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.903693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.903702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.904048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.904059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.904293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.904302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.904636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.904645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.904875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.904885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.905220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.905228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.905565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.905574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.905910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.905919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.906253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.906263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 
00:31:11.776 [2024-07-15 09:39:58.906560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.776 [2024-07-15 09:39:58.906569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.776 qpair failed and we were unable to recover it. 00:31:11.776 [2024-07-15 09:39:58.906894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.906903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.907219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.907228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.907544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.907553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.907841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.907851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.908168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.908177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.908492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.908502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.908818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.908828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.909015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.909024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.909375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.909384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 
00:31:11.777 [2024-07-15 09:39:58.909694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.909706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.910017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.910027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.910365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.910375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.910777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.910787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.911147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.911157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.911491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.911501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.911838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.911850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.912073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.912092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.912430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.912449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.912625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.912639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 
00:31:11.777 [2024-07-15 09:39:58.912961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.912984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.913390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.913411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.913836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.913850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.914211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.914221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffaa50 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 Read completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Read completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Read completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Read completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Read completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Read completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Read completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Read completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Read completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Read completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Read completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Write completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Read completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Write completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Write completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Read completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Read completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Read completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Read completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Write completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Read completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Read completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Read completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Write completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Read completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Read completed with 
error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Write completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Write completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Write completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Read completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Write completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 Write completed with error (sct=0, sc=8) 00:31:11.777 starting I/O failed 00:31:11.777 [2024-07-15 09:39:58.914407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:11.777 [2024-07-15 09:39:58.914732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.914743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.915071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.915079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.915409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.915417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.915755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.915763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.915964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.915973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.916315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.916322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.916622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.916634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.777 qpair failed and we were unable to recover it. 00:31:11.777 [2024-07-15 09:39:58.916801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.777 [2024-07-15 09:39:58.916809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 
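The burst of "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" lines is where commands outstanding on the qpair are completed back with a non-zero NVMe status (sct and sc are the status code type and status code fields of the completion), and the spdk_nvme_qpair_process_completions() message that follows gives the transport-level reason: the -6 in "CQ transport error -6 (No such device or address) on qpair id 2" is the negative-errno form of ENXIO, whose text is exactly what is printed in parentheses. From that point the connect() retries continue against a fresh qpair context, which is why the tqpair pointer in the later entries changes from 0x1ffaa50 to 0x7f8b58000b90. A two-line check of the errno values the log relies on (illustrative only; values are the Linux/glibc ones):

/* Illustrative only: confirm that the numeric codes in the log map to the
 * errno values their messages name (Linux/glibc values shown below). */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    printf("ECONNREFUSED = %d (%s)\n", ECONNREFUSED, strerror(ECONNREFUSED));
    printf("ENXIO        = %d (%s)\n", ENXIO, strerror(ENXIO));
    /* Expected output on Linux:
     *   ECONNREFUSED = 111 (Connection refused)       -> "connect() failed, errno = 111"
     *   ENXIO        = 6 (No such device or address)  -> "CQ transport error -6"
     */
    return 0;
}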
00:31:11.778 [2024-07-15 09:39:58.917132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.917138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.917372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.917378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.917681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.917688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.917904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.917910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.918266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.918273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.918586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.918593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.918880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.918887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.919171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.919178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.919496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.919503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.919820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.919827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 
00:31:11.778 [2024-07-15 09:39:58.920160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.920167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.920482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.920489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.920850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.920857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.921129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.921135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.921511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.921518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.921834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.921840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.922044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.922050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.922255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.922262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.922534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.922540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.922852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.922859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 
00:31:11.778 [2024-07-15 09:39:58.923246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.923253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.923431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.923438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.923713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.923719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.924068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.924075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.924388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.924394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.924700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.924709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.925003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.925010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.925313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.925320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.925607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.925613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.925833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.925840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 
00:31:11.778 [2024-07-15 09:39:58.926145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.926152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.926474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.926481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.926854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.926861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.927161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.778 [2024-07-15 09:39:58.927169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.778 qpair failed and we were unable to recover it. 00:31:11.778 [2024-07-15 09:39:58.927461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.927467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 00:31:11.779 [2024-07-15 09:39:58.927792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.927799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 00:31:11.779 [2024-07-15 09:39:58.927968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.927975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 00:31:11.779 [2024-07-15 09:39:58.928324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.928331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 00:31:11.779 [2024-07-15 09:39:58.928534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.928541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 00:31:11.779 [2024-07-15 09:39:58.928845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.928852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 
00:31:11.779 [2024-07-15 09:39:58.929198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.929205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 00:31:11.779 [2024-07-15 09:39:58.929524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.929531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 00:31:11.779 [2024-07-15 09:39:58.929839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.929845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 00:31:11.779 [2024-07-15 09:39:58.930150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.930157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 00:31:11.779 [2024-07-15 09:39:58.930491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.930497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 00:31:11.779 [2024-07-15 09:39:58.930819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.930826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 00:31:11.779 [2024-07-15 09:39:58.931152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.931159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 00:31:11.779 [2024-07-15 09:39:58.931356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.931363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 00:31:11.779 [2024-07-15 09:39:58.931700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.931707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 00:31:11.779 [2024-07-15 09:39:58.932019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.932026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 
00:31:11.779 [2024-07-15 09:39:58.932333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.932340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 00:31:11.779 [2024-07-15 09:39:58.932754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.932763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 00:31:11.779 [2024-07-15 09:39:58.933095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.933102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 00:31:11.779 [2024-07-15 09:39:58.933312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.933319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 00:31:11.779 [2024-07-15 09:39:58.933608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.933615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 00:31:11.779 [2024-07-15 09:39:58.933894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.933901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 00:31:11.779 [2024-07-15 09:39:58.934098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.934104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 00:31:11.779 [2024-07-15 09:39:58.934416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.934423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 00:31:11.779 [2024-07-15 09:39:58.934762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.934769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 00:31:11.779 [2024-07-15 09:39:58.935095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.935101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 
00:31:11.779 [2024-07-15 09:39:58.935445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.779 [2024-07-15 09:39:58.935452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:11.779 qpair failed and we were unable to recover it. 00:31:12.064 [2024-07-15 09:39:58.935786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.064 [2024-07-15 09:39:58.935794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.064 qpair failed and we were unable to recover it. 00:31:12.064 [2024-07-15 09:39:58.936108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.064 [2024-07-15 09:39:58.936116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.064 qpair failed and we were unable to recover it. 00:31:12.064 [2024-07-15 09:39:58.936429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.064 [2024-07-15 09:39:58.936436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.064 qpair failed and we were unable to recover it. 00:31:12.064 [2024-07-15 09:39:58.936664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.064 [2024-07-15 09:39:58.936670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.064 qpair failed and we were unable to recover it. 00:31:12.064 [2024-07-15 09:39:58.936865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.064 [2024-07-15 09:39:58.936875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.064 qpair failed and we were unable to recover it. 00:31:12.064 [2024-07-15 09:39:58.937172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.064 [2024-07-15 09:39:58.937178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.064 qpair failed and we were unable to recover it. 00:31:12.064 [2024-07-15 09:39:58.937567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.064 [2024-07-15 09:39:58.937574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.064 qpair failed and we were unable to recover it. 00:31:12.064 [2024-07-15 09:39:58.937915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.064 [2024-07-15 09:39:58.937923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.064 qpair failed and we were unable to recover it. 00:31:12.064 [2024-07-15 09:39:58.938247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.064 [2024-07-15 09:39:58.938253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.064 qpair failed and we were unable to recover it. 
00:31:12.064 [2024-07-15 09:39:58.938619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.064 [2024-07-15 09:39:58.938626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.064 qpair failed and we were unable to recover it. 00:31:12.064 [2024-07-15 09:39:58.938966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.064 [2024-07-15 09:39:58.938973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.064 qpair failed and we were unable to recover it. 00:31:12.064 [2024-07-15 09:39:58.939293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.064 [2024-07-15 09:39:58.939300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.064 qpair failed and we were unable to recover it. 00:31:12.064 [2024-07-15 09:39:58.939690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.064 [2024-07-15 09:39:58.939696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.064 qpair failed and we were unable to recover it. 00:31:12.064 [2024-07-15 09:39:58.940042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.940049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.940362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.940369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.940537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.940544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.940856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.940863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.941166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.941173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.941485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.941491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 
00:31:12.065 [2024-07-15 09:39:58.941814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.941822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.942118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.942125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.942489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.942495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.942721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.942727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.943034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.943041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.943236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.943243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.943570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.943577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.943902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.943909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.944226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.944233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.944551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.944557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 
00:31:12.065 [2024-07-15 09:39:58.944867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.944874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.945195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.945202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.945501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.945508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.945748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.945761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.946152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.946158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.946355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.946361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.946646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.946653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.947028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.947035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.947341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.947348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.947645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.947651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 
00:31:12.065 [2024-07-15 09:39:58.947936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.947943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.948268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.948274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.948573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.948580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.948806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.948812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.949120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.949127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.949339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.949347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.949658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.949664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.949922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.949929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.950113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.950120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.950516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.950522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 
00:31:12.065 [2024-07-15 09:39:58.950860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.950867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.951202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.951209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.951357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.951364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.951682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.951689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.065 [2024-07-15 09:39:58.952020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.065 [2024-07-15 09:39:58.952027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.065 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.952328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.952334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.952641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.952647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.952844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.952851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.953173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.953180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.953397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.953404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 
00:31:12.066 [2024-07-15 09:39:58.953739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.953745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.954084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.954091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.954366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.954373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.954692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.954698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.955003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.955011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.955326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.955333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.955529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.955536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.955855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.955862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.956213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.956220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.956522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.956528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 
00:31:12.066 [2024-07-15 09:39:58.956912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.956919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.957236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.957242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.957437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.957444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.957761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.957768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.958100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.958107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.958301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.958308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.958534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.958542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.958888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.958895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.959209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.959216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.959549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.959555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 
00:31:12.066 [2024-07-15 09:39:58.959866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.959873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.960193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.960200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.960500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.960507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.960807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.960814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.961157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.961172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.961579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.961588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.961886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.961894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.962222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.962229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.962532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.962539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.962733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.962740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 
00:31:12.066 [2024-07-15 09:39:58.963102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.963109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.963416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.963423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.963740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.963748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.964124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.964131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.964449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.964456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.066 qpair failed and we were unable to recover it. 00:31:12.066 [2024-07-15 09:39:58.964756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.066 [2024-07-15 09:39:58.964763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.965152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.965158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.965349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.965355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.965769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.965775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.965984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.965991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 
00:31:12.067 [2024-07-15 09:39:58.966313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.966320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.966545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.966552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.966881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.966888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.967064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.967070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.967402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.967409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.967725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.967732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.967928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.967935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.968342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.968350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.968670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.968677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.968966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.968973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 
00:31:12.067 [2024-07-15 09:39:58.969294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.969300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.969604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.969610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.969932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.969939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.970246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.970253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.970645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.970652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.970959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.970966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.971286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.971292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.971500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.971506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.971831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.971838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.972164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.972171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 
00:31:12.067 [2024-07-15 09:39:58.972486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.972493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.972856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.972863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.973130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.973136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.973444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.973450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.973749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.973758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.974078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.974087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.974385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.974392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.974782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.974789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.975053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.975060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.975376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.975383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 
00:31:12.067 [2024-07-15 09:39:58.975684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.975691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.975979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.975986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.976301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.976308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.976382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.976389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.976665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.976672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.977058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-07-15 09:39:58.977065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-07-15 09:39:58.977369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.977376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.977635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.977642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.977989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.977995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.978335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.978342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 
00:31:12.068 [2024-07-15 09:39:58.978680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.978688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.979016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.979024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.979219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.979226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.979526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.979533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.979725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.979732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.980051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.980058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.980407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.980415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.980726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.980734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.981116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.981125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.981425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.981431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 
00:31:12.068 [2024-07-15 09:39:58.981736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.981742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.982048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.982055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.982388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.982396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.982584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.982591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.982872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.982883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.983258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.983272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.983619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.983626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.983825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.983834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.984167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.984174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.984504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.984514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 
00:31:12.068 [2024-07-15 09:39:58.984748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.984766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.985089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.985096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.985415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.985421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.985741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.985759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.986085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.986097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.986439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.986450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.986790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.986797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.987110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.987116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.987419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.987427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-07-15 09:39:58.987755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-07-15 09:39:58.987769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 
00:31:12.069 [2024-07-15 09:39:58.988137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.988146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.988544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.988551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.988768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.988775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.989096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.989103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.989402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.989409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.989773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.989780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.990130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.990138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.990437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.990443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.991173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.991189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.991469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.991477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 
00:31:12.069 [2024-07-15 09:39:58.991787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.991797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.992189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.992207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.992522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.992530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.992843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.992851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.993034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.993041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.993339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.993345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.993691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.993698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.993788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.993795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.994100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.994107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.994415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.994422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 
00:31:12.069 [2024-07-15 09:39:58.994754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.994763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.995029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.995036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.995383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.995390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.995748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.995761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.996071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.996079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.996344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.996350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.996589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.996595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.996818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.996826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.997175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.997181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.997450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.997456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 
00:31:12.069 [2024-07-15 09:39:58.997603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.997610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.997800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.997807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.998224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.998230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.998530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.998536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.998843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.998850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.999158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.999166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.999443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.999450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:58.999742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:58.999750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:59.000097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:59.000104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 00:31:12.069 [2024-07-15 09:39:59.000395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.069 [2024-07-15 09:39:59.000401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.069 qpair failed and we were unable to recover it. 
00:31:12.070 [2024-07-15 09:39:59.000603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.000610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.000808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.000815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.001089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.001095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.001439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.001446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.001767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.001774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.002223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.002230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.002588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.002594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.002926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.002932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.003274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.003281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.003357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.003364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 
00:31:12.070 [2024-07-15 09:39:59.003667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.003674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.004026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.004033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.004247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.004254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.004553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.004559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.004913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.004919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.005251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.005257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.005560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.005567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.005810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.005817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.006151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.006157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.006223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.006229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 
00:31:12.070 [2024-07-15 09:39:59.006438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.006445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.006759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.006765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.007002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.007009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.007356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.007363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.007695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.007701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.007993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.008001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.008327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.008334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.008659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.008666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.008815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.008822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.009068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.009074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 
00:31:12.070 [2024-07-15 09:39:59.009394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.009401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.009695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.009702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.009948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.009954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.010262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.010268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.010584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.010591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.010826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.010834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.011055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.011062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.011398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.011405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.011758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.011766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 00:31:12.070 [2024-07-15 09:39:59.011963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.011970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.070 qpair failed and we were unable to recover it. 
00:31:12.070 [2024-07-15 09:39:59.012338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.070 [2024-07-15 09:39:59.012345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.012654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.012660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.012930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.012936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.013290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.013296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.013470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.013477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.013811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.013817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.014139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.014146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.014343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.014350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.014633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.014640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.014855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.014862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 
00:31:12.071 [2024-07-15 09:39:59.015193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.015200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.015499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.015506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.015842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.015848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.016163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.016171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.016368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.016374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.016484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.016490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.016790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.016796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.017115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.017123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.017426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.017434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.017622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.017629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 
00:31:12.071 [2024-07-15 09:39:59.017917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.017923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.018227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.018234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.018554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.018561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.018897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.018905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.019245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.019252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.019576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.019583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.019842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.019848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.020151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.020158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.020343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.020350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.020576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.020583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 
00:31:12.071 [2024-07-15 09:39:59.020899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.020906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.021198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.021205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.021552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.021558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.021846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.021853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.022182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.022188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.022480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.022488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.022808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.022815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.023044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.023050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.023367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.023373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.023696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.023702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 
00:31:12.071 [2024-07-15 09:39:59.023870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.071 [2024-07-15 09:39:59.023878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.071 qpair failed and we were unable to recover it. 00:31:12.071 [2024-07-15 09:39:59.024058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.024064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.024369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.024375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.024575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.024582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.024772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.024779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.025090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.025096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.025370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.025376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.025667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.025673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.025888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.025895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.026226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.026232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 
00:31:12.072 [2024-07-15 09:39:59.026561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.026568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.026851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.026857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.026893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.026900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.027270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.027278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.027615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.027622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.027846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.027853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.028154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.028161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.028330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.028337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.028609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.028615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.028944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.028951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 
00:31:12.072 [2024-07-15 09:39:59.029294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.029300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.029601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.029608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.029931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.029938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.030126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.030132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.030461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.030467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.030700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.030706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.031018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.031024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.031261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.031268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.031568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.031574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.031763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.031770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 
00:31:12.072 [2024-07-15 09:39:59.032085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.032092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.032328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.032334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.032638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.032644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.032948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.032955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.033211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.033217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.033520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.033528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.033796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.033802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.034139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-07-15 09:39:59.034145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-07-15 09:39:59.034446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.034453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.034635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.034642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 
00:31:12.073 [2024-07-15 09:39:59.034862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.034869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.035201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.035208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.035409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.035417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.035823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.035830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.036123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.036130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.036463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.036470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.036669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.036676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.036924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.036931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.037256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.037262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.037480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.037487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 
00:31:12.073 [2024-07-15 09:39:59.037859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.037866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.038174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.038182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.038560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.038566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.038879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.038886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.039111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.039118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.039307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.039313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.039541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.039547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.039845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.039853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.040201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.040207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.040435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.040442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 
00:31:12.073 [2024-07-15 09:39:59.040531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.040538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.040861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.040867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.041141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.041147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.041449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.041455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.041806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.041813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.042145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.042151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.042308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.042315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.042506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.042513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.042675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.042683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-07-15 09:39:59.042782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.042789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 
00:31:12.073 [2024-07-15 09:39:59.043152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-07-15 09:39:59.043159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.043461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.043468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.043810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.043817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.044055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.044061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.044394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.044400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.044723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.044730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.044805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.044812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.045098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.045104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.045275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.045282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.045561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.045568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 
00:31:12.074 [2024-07-15 09:39:59.045955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.045961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.046335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.046341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.046675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.046682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.046969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.046976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.047303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.047309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.047629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.047636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.047958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.047965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.048285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.048291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.048590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.048596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.048769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.048776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 
00:31:12.074 [2024-07-15 09:39:59.049067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.049073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.049283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.049289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.049625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.049631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.049936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.049943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.050138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.050145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.050425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.050431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.050769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.050776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.050992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.050999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.051229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.051236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.051417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.051425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 
00:31:12.074 [2024-07-15 09:39:59.051782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.051790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.052123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.052129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.052413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.052422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.052754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.052761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.053073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.053079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.053395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.053401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.053595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.053601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.053802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.053809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.054135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.054141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.054351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.054357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 
00:31:12.074 [2024-07-15 09:39:59.054656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-07-15 09:39:59.054663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-07-15 09:39:59.054972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.054979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.055309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.055315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.055623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.055638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.055955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.055962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.056314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.056321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.056653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.056659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.056968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.056974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.057312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.057318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.057542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.057548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 
00:31:12.075 [2024-07-15 09:39:59.057705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.057712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.058018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.058025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.058355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.058361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.058670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.058677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.058980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.058987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.059293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.059300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.059646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.059653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.059897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.059903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.060262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.060269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.060505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.060512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 
00:31:12.075 [2024-07-15 09:39:59.060831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.060837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.061135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.061141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.061508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.061514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.061903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.061909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.062150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.062157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.062482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.062489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.062704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.062710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.062939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.062946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.063270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.063276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.063562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.063569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 
00:31:12.075 [2024-07-15 09:39:59.063903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.063909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.064119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.064125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.064420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.064429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.064614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.064621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.064929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.064935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.065336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.065342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.065608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.065615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.065937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.065944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.066265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.066271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.066484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.066490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 
00:31:12.075 [2024-07-15 09:39:59.066815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.066822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.067200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-07-15 09:39:59.067206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-07-15 09:39:59.067501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.067508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.067660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.067667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.067999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.068006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.068333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.068339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.068689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.068696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.069035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.069042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.069343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.069350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.069664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.069670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 
00:31:12.076 [2024-07-15 09:39:59.069988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.069994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.070367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.070374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.070691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.070698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.071018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.071026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.071346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.071353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.071501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.071508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.071631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.071638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.071969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.071976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.072304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.072311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.072632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.072638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 
00:31:12.076 [2024-07-15 09:39:59.072954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.072962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.073284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.073290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.073605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.073611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.073817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.073823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.074143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.074150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.074443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.074450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.074649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.074655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.074979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.074985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.075312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.075318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.075539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.075545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 
00:31:12.076 [2024-07-15 09:39:59.075917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.075924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.076136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.076142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.076407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.076415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.076594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.076600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.077063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.077070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-07-15 09:39:59.077380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-07-15 09:39:59.077387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.077622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.077629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.077942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.077948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.078281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.078295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.078617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.078623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 
00:31:12.077 [2024-07-15 09:39:59.078808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.078814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.079230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.079237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.079431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.079438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.079793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.079800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.079988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.079994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.080426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.080433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.080753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.080760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.081034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.081040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.081342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.081348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.081666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.081673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 
00:31:12.077 [2024-07-15 09:39:59.081974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.081980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.082293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.082300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.082681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.082688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.083014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.083021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.083339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.083346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.083649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.083656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.083859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.083866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.084094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.084100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.084461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.084468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.084778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.084784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 
00:31:12.077 [2024-07-15 09:39:59.085103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.085109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.085423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.085430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.085681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.085687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.086128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.086135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.086452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.086458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.086770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.086776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.087006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.087013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.087212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.087218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-07-15 09:39:59.087553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-07-15 09:39:59.087559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.087881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.087887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 
00:31:12.078 [2024-07-15 09:39:59.088115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.088121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.088330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.088337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.088663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.088671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.088992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.088999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.089318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.089324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.089631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.089637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.089861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.089868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.090188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.090194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.090496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.090503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.090841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.090848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 
00:31:12.078 [2024-07-15 09:39:59.091140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.091147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.091426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.091432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.091728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.091735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.091952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.091959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.092236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.092243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.092568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.092574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.092888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.092895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.093220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.093227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.093471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.093478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.093814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.093821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 
00:31:12.078 [2024-07-15 09:39:59.094025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.094031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.094335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.094341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.094647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.094654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.094994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.095000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.095301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.095308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.095503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.095510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.095825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.095832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.096152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.096158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.096449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.096456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.096846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.096853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 
00:31:12.078 [2024-07-15 09:39:59.097093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.097100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.097424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.097431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.097735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.097742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.098086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.098092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.098468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.098476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.098794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.098801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.099128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.099135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.099429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.099442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.099767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.099774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-07-15 09:39:59.100151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-07-15 09:39:59.100157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 
00:31:12.079 [2024-07-15 09:39:59.100481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.100487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.100789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.100796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.101102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.101111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.101293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.101300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.101514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.101520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.101904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.101911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.102197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.102203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.102522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.102529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.102833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.102839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.103051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.103058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 
00:31:12.079 [2024-07-15 09:39:59.103358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.103364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.103686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.103692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.104057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.104064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.104289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.104295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.104596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.104603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.104901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.104908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.105236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.105243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.105430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.105436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.105597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.105604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.105768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.105775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 
00:31:12.079 [2024-07-15 09:39:59.106066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.106072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.106308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.106314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.106640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.106646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.106933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.106940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.107262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.107269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.107573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.107579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.107899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.107906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.108290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.108297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.108615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.108622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.108943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.108950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 
00:31:12.079 [2024-07-15 09:39:59.109266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.109273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.109559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.109566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.109820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.109827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.110158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.110165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.110477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.110484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.110807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.110813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.110983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.110991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.111370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.111377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.111715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.111723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-07-15 09:39:59.112051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-07-15 09:39:59.112058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 
00:31:12.079 [2024-07-15 09:39:59.112360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.112366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.112564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.112571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.112927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.112935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.113143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.113149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.113540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.113547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.113724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.113731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.114043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.114050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.114373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.114379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.114493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.114500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.114712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.114718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 
00:31:12.080 [2024-07-15 09:39:59.115000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.115007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.115335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.115342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.115640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.115648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.115830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.115837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.116180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.116187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.116504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.116510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.116898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.116905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.117220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.117227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.117550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.117556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.117837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.117843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 
00:31:12.080 [2024-07-15 09:39:59.118179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.118185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.118521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.118527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.118808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.118814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.119183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.119190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.119489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.119496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.119846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.119854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.120098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.120105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.120309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.120316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.120641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.120648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.120984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.120991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 
00:31:12.080 [2024-07-15 09:39:59.121291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.121298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.121599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.121607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.121916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.121922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.122239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.122246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.122390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.122397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.122755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.122762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.123063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.123070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.123411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.123419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.123771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.123778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-07-15 09:39:59.124092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.124098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 
00:31:12.080 [2024-07-15 09:39:59.124417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-07-15 09:39:59.124423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.124735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.124742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.125121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.125129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.125326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.125333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.125676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.125682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.125968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.125975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.126291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.126298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.126633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.126639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.127055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.127062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.127417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.127424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 
00:31:12.081 [2024-07-15 09:39:59.127737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.127744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.127940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.127946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.128243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.128251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.128568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.128574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.128891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.128898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.129230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.129236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.129537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.129544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.129737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.129746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.130054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.130061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.130251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.130257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 
00:31:12.081 [2024-07-15 09:39:59.130602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.130609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.130909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.130915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.131233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.131239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.131547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.131554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.131914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.131922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.132275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.132282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.132576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.132584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.132887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.132894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.133226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.133232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.133545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.133551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 
00:31:12.081 [2024-07-15 09:39:59.133878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.133885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.134208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.134222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.134536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.134542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.134848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.134855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.135013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.135021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.135250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.135257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.135557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.135564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.135913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.135919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.136241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.136248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.136568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.136574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 
00:31:12.081 [2024-07-15 09:39:59.136941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.136947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-07-15 09:39:59.137265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-07-15 09:39:59.137272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.137558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.137566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.137941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.137948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.138139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.138145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.138441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.138448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.138790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.138797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.139131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.139137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.139442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.139449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.139742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.139748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 
00:31:12.082 [2024-07-15 09:39:59.140122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.140130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.140466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.140473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.140831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.140838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.141138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.141145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.141458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.141464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.141846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.141853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.142165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.142171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.142513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.142519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.142898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.142905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.143067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.143074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 
00:31:12.082 [2024-07-15 09:39:59.143287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.143294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.143607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.143613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.143919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.143925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.144243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.144250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.144555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.144562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.144867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.144874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.145186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.145193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.145506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.145513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.145607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.145614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.145806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.145813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 
00:31:12.082 [2024-07-15 09:39:59.146097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.146103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.146402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.146409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.146735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.146742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.146972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.146979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.147307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.147313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.147615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-07-15 09:39:59.147623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-07-15 09:39:59.147929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.147936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.148246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.148253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.148532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.148539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.148850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.148857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 
00:31:12.083 [2024-07-15 09:39:59.149177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.149183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.149485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.149492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.149689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.149697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.150012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.150018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.150321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.150328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.150642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.150648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.150931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.150938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.151256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.151263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.151470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.151477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.151788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.151795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 
00:31:12.083 [2024-07-15 09:39:59.152088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.152095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.152306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.152313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.152526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.152532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.152933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.152940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.153167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.153174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.153552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.153558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.153943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.153950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.154282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.154289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.154458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.154465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.154798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.154805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 
00:31:12.083 [2024-07-15 09:39:59.155121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.155128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.155445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.155451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.155663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.155669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.155987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.155994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.156333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.156340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.156663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.156670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.156986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.156994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.157289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.157295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.157595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.157602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.157946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.157953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 
00:31:12.083 [2024-07-15 09:39:59.158233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.158239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.158574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.158581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.158882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.158894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.159212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.159219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.159520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.159527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.159831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.159838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.160137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-07-15 09:39:59.160144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-07-15 09:39:59.160443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.160450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.160749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.160766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.160957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.160963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 
00:31:12.084 [2024-07-15 09:39:59.161342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.161348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.161639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.161645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.161975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.161984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.162284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.162291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.162590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.162597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.162787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.162794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.163094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.163100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.163396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.163403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.163719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.163726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.164022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.164029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 
00:31:12.084 [2024-07-15 09:39:59.164344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.164350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.164652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.164659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.164897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.164903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.165237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.165244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.165556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.165562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.165835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.165842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.166158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.166164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.166542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.166549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.166862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.166868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.167167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.167174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 
00:31:12.084 [2024-07-15 09:39:59.167510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.167516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.167819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.167826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.168147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.168154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.168454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.168461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.168776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.168783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.169114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.169121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.169440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.169446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.169641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.169648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.169921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.169928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.170260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.170267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 
00:31:12.084 [2024-07-15 09:39:59.170574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.170580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.170915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.170921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.171238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.171244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.171553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.171560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.171856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.171863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.172204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.172211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.172408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-07-15 09:39:59.172414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-07-15 09:39:59.172635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.172641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.172934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.172941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.173282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.173288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 
00:31:12.085 [2024-07-15 09:39:59.173609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.173616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.173930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.173937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.174110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.174118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.174427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.174434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.174754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.174761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.175046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.175053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.175283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.175290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.175594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.175602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.175942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.175949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.176326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.176333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 
00:31:12.085 [2024-07-15 09:39:59.176636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.176642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.176934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.176940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.177129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.177135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.177505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.177511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.177835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.177842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.178175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.178181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.178452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.178459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.178761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.178768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.178961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.178968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.179285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.179292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 
00:31:12.085 [2024-07-15 09:39:59.179603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.179611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.179692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.179699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.179998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.180006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.180332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.180339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.180642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.180649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.180933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.180940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.181267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.181274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.181615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.181623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.181964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.181971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.182312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.182320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 
00:31:12.085 [2024-07-15 09:39:59.182714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.182721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.183038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.183046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.183221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.183228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.183502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.183509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.183826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.183833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.184195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.184202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.184502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.184509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.184810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-07-15 09:39:59.184817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-07-15 09:39:59.185114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.185121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.185433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.185440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 
00:31:12.086 [2024-07-15 09:39:59.185744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.185752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.186079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.186086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.186389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.186397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.186708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.186715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.187024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.187032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.187191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.187197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.187481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.187487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.187653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.187660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.187978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.187985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.188296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.188304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 
00:31:12.086 [2024-07-15 09:39:59.188616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.188624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.188978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.188985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.189216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.189223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.189524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.189530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.189741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.189747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.190076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.190083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.190380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.190387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.190700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.190707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.191011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.191018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.191335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.191341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 
00:31:12.086 [2024-07-15 09:39:59.191574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.191581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.191813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.191820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.192153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.192160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.192366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.192373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.192702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.192709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.193009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.193016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.193311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.193318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.193628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.193634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.193937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.193944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.194268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.194274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 
00:31:12.086 [2024-07-15 09:39:59.194651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.194658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.195005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.195012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.195307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-07-15 09:39:59.195313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-07-15 09:39:59.195532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.195538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.195898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.195905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.196221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.196228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.196521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.196529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.196846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.196853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.197177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.197183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.197473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.197480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 
00:31:12.087 [2024-07-15 09:39:59.197814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.197821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.198099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.198106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.198294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.198300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.198620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.198627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.198918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.198925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.199285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.199291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.199585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.199592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.199904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.199910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.200212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.200219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.200471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.200478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 
00:31:12.087 [2024-07-15 09:39:59.200822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.200829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.200975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.200983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.201335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.201341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.201652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.201659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.201984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.201991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.202308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.202315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.202662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.202669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.202984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.202991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.203057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.203063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.203368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.203375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 
00:31:12.087 [2024-07-15 09:39:59.203568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.203575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.203921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.203928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.204127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.204133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.204498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.204505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.204841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.204847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.205166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.205173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.205368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.205374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.205695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.205701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.206018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.206025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.206366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.206374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 
00:31:12.087 [2024-07-15 09:39:59.206736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.206743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.207063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.207070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.207253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.207259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-07-15 09:39:59.207544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-07-15 09:39:59.207552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.207857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.207863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.208055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.208062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.208287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.208294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.208638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.208645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.208979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.208986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.209329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.209336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 
00:31:12.088 [2024-07-15 09:39:59.209657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.209664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.209990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.209997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.210184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.210191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.210502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.210509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.210687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.210694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.211000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.211007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.211325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.211333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.211655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.211663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.211831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.211839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.212151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.212158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 
00:31:12.088 [2024-07-15 09:39:59.212497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.212504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.212843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.212849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.213181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.213195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.213528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.213535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.213846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.213853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.214189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.214195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.214515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.214529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.214754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.214760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.215049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.215056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.215378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.215385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 
00:31:12.088 [2024-07-15 09:39:59.215688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.215696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.216030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.216037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.216353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.216359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.216554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.216560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.216888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.216895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.217246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.217252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.217578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.217584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.217893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.217900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.218242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.218249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.218549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.218557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 
00:31:12.088 [2024-07-15 09:39:59.218862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.218868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.219187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.219193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.219387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.219394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-07-15 09:39:59.219702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-07-15 09:39:59.219709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.220032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.220040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.220390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.220397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.220788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.220795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.221096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.221103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.221395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.221408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.221732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.221739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 
00:31:12.089 [2024-07-15 09:39:59.222037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.222045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.222360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.222367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.222679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.222686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.223061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.223068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.223385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.223391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.223753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.223760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.224150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.224157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.224451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.224459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.224640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.224647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.224948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.224955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 
00:31:12.089 [2024-07-15 09:39:59.225278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.225284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.225587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.225593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.225936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.225943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.226259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.226266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.226584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.226591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.226821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.226827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.227037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.227044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.227347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.227354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.227674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.227682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.227987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.227994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 
00:31:12.089 [2024-07-15 09:39:59.228375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.228382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.228577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.228585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.229389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.229404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.229700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.229708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.230330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.230344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.230638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.230645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.231492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.231508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.231793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.231801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.232539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.232554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.232923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.232933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 
00:31:12.089 [2024-07-15 09:39:59.233265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.233272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.233457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.233463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.233637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.233643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.233850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-07-15 09:39:59.233856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-07-15 09:39:59.234186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-07-15 09:39:59.234192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-07-15 09:39:59.234510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-07-15 09:39:59.234517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-07-15 09:39:59.234789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-07-15 09:39:59.234795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-07-15 09:39:59.235375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-07-15 09:39:59.235389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-07-15 09:39:59.235691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-07-15 09:39:59.235699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-07-15 09:39:59.235909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-07-15 09:39:59.235916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 
00:31:12.090 [2024-07-15 09:39:59.236309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-07-15 09:39:59.236316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-07-15 09:39:59.236630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-07-15 09:39:59.236637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-07-15 09:39:59.236929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-07-15 09:39:59.236936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-07-15 09:39:59.237253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-07-15 09:39:59.237260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-07-15 09:39:59.237479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-07-15 09:39:59.237486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-07-15 09:39:59.237788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-07-15 09:39:59.237795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-07-15 09:39:59.238151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-07-15 09:39:59.238158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-07-15 09:39:59.238484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-07-15 09:39:59.238490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-07-15 09:39:59.238808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-07-15 09:39:59.238814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-07-15 09:39:59.239184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-07-15 09:39:59.239190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 
00:31:12.090 [2024-07-15 09:39:59.239544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-07-15 09:39:59.239550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-07-15 09:39:59.239837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-07-15 09:39:59.239844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-07-15 09:39:59.240171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-07-15 09:39:59.240177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-07-15 09:39:59.240384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-07-15 09:39:59.240391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-07-15 09:39:59.240703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-07-15 09:39:59.240710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-07-15 09:39:59.241013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-07-15 09:39:59.241020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-07-15 09:39:59.241319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-07-15 09:39:59.241326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-07-15 09:39:59.241607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-07-15 09:39:59.241614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-07-15 09:39:59.241938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-07-15 09:39:59.241944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.373 [2024-07-15 09:39:59.242252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-07-15 09:39:59.242268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.373 qpair failed and we were unable to recover it. 
00:31:12.373 [2024-07-15 09:39:59.242601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-07-15 09:39:59.242607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.373 qpair failed and we were unable to recover it. 00:31:12.373 [2024-07-15 09:39:59.242826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-07-15 09:39:59.242832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.373 qpair failed and we were unable to recover it. 00:31:12.373 [2024-07-15 09:39:59.243052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-07-15 09:39:59.243060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.373 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.243348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.243354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.243681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.243688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.244024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.244031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.244343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.244350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.244679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.244685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.244878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.244884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.245218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.245226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 
00:31:12.374 [2024-07-15 09:39:59.245414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.245421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.245628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.245635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.245956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.245963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.246339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.246346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.246654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.246661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.246900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.246907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.247217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.247224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.247539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.247546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.247884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.247892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.248078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.248085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 
00:31:12.374 [2024-07-15 09:39:59.248411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.248417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.248808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.248815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.249150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.249156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.249508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.249514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.249706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.249712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.250030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.250037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.250329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.250335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.250642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.250650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.250990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.250998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.251183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.251189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 
00:31:12.374 [2024-07-15 09:39:59.251469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.251475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.251713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.251719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.252041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.252048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.252357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.252363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.252685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.252691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.253047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.253054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.253357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.253364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.253570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.253582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.253858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.253865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.254174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.254180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 
00:31:12.374 [2024-07-15 09:39:59.254378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.254384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-07-15 09:39:59.254693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-07-15 09:39:59.254700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.255042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.255048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.255332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.255338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.255520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.255528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.255722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.255729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.256068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.256075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.256397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.256404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.256613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.256620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.256968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.256975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 
00:31:12.375 [2024-07-15 09:39:59.257295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.257302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.257671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.257678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.257978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.257985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.258309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.258316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.258663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.258669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.258966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.258972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.259306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.259312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.259618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.259624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.259933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.259940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.260261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.260268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 
00:31:12.375 [2024-07-15 09:39:59.260455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.260461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.260672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.260679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.261046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.261052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.261385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.261391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.261692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.261698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.261937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.261944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.262291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.262297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.262601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.262608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.262947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.262954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.263343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.263349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 
00:31:12.375 [2024-07-15 09:39:59.263668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.263675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.264008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-07-15 09:39:59.264015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-07-15 09:39:59.264318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.264324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.264623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.264630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.264979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.264985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.265293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.265299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.265622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.265628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.265916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.265923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.266093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.266100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.266423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.266429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 
00:31:12.376 [2024-07-15 09:39:59.266733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.266739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.267046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.267052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.267313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.267319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.267642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.267648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.267936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.267943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.268264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.268270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.268432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.268439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.268768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.268775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.269072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.269078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.269320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.269328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 
00:31:12.376 [2024-07-15 09:39:59.269539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.269545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.269879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.269886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.270176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.270182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.270482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.270489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.270859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.270865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.271170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.271177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.271477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.271484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.271822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.271829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.272002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.272008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.272342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.272349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 
00:31:12.376 [2024-07-15 09:39:59.272674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.272681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.272959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.272965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.273293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.273299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.273605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.273612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.273933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.273940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.274244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.274250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.274441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.274447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.274592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.274598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.274799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.274805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.275142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.275148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 
00:31:12.376 [2024-07-15 09:39:59.275477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.275484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.275804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.275811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-07-15 09:39:59.276147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-07-15 09:39:59.276153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 00:31:12.377 [2024-07-15 09:39:59.276497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.276504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 00:31:12.377 [2024-07-15 09:39:59.276814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.276820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 00:31:12.377 [2024-07-15 09:39:59.277038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.277045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 00:31:12.377 [2024-07-15 09:39:59.277375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.277382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 00:31:12.377 [2024-07-15 09:39:59.277695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.277702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 00:31:12.377 [2024-07-15 09:39:59.278002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.278009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 00:31:12.377 [2024-07-15 09:39:59.278329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.278336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 
00:31:12.377 [2024-07-15 09:39:59.278619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.278626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 00:31:12.377 [2024-07-15 09:39:59.278987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.279001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 00:31:12.377 [2024-07-15 09:39:59.279357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.279363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 00:31:12.377 [2024-07-15 09:39:59.279614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.279620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 00:31:12.377 [2024-07-15 09:39:59.279934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.279940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 00:31:12.377 [2024-07-15 09:39:59.280138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.280144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 00:31:12.377 [2024-07-15 09:39:59.280463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.280470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 00:31:12.377 [2024-07-15 09:39:59.280806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.280812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 00:31:12.377 [2024-07-15 09:39:59.281257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.281264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 00:31:12.377 [2024-07-15 09:39:59.281639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.281646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 
00:31:12.377 [2024-07-15 09:39:59.281881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.281887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 00:31:12.377 [2024-07-15 09:39:59.282123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.282130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 00:31:12.377 [2024-07-15 09:39:59.282457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.282464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 00:31:12.377 [2024-07-15 09:39:59.282794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.282801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 00:31:12.377 [2024-07-15 09:39:59.282948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.282954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 00:31:12.377 [2024-07-15 09:39:59.283172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.283178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 00:31:12.377 [2024-07-15 09:39:59.283471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.283477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 00:31:12.377 [2024-07-15 09:39:59.283683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.283689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 00:31:12.377 [2024-07-15 09:39:59.284049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.284055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 00:31:12.377 [2024-07-15 09:39:59.284374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.377 [2024-07-15 09:39:59.284380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.377 qpair failed and we were unable to recover it. 
00:31:12.377 [2024-07-15 09:39:59.284677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.377 [2024-07-15 09:39:59.284684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:12.377 qpair failed and we were unable to recover it.
00:31:12.377 [the same error pair from posix_sock_create and nvme_tcp_qpair_connect_sock, each followed by "qpair failed and we were unable to recover it.", repeats for every subsequent connection attempt from 09:39:59.284994 through 09:39:59.347932, all against tqpair=0x7f8b58000b90, addr=10.0.0.2, port=4420]
00:31:12.383 [2024-07-15 09:39:59.348249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.383 [2024-07-15 09:39:59.348256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.383 qpair failed and we were unable to recover it. 00:31:12.383 [2024-07-15 09:39:59.348549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.383 [2024-07-15 09:39:59.348555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.383 qpair failed and we were unable to recover it. 00:31:12.383 [2024-07-15 09:39:59.348839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.383 [2024-07-15 09:39:59.348846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.383 qpair failed and we were unable to recover it. 00:31:12.383 [2024-07-15 09:39:59.349158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.383 [2024-07-15 09:39:59.349165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.383 qpair failed and we were unable to recover it. 00:31:12.383 [2024-07-15 09:39:59.349487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.383 [2024-07-15 09:39:59.349493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.383 qpair failed and we were unable to recover it. 00:31:12.383 [2024-07-15 09:39:59.349806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.383 [2024-07-15 09:39:59.349812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.383 qpair failed and we were unable to recover it. 00:31:12.383 [2024-07-15 09:39:59.350137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.383 [2024-07-15 09:39:59.350144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.383 qpair failed and we were unable to recover it. 00:31:12.383 [2024-07-15 09:39:59.350460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.383 [2024-07-15 09:39:59.350467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.383 qpair failed and we were unable to recover it. 00:31:12.383 [2024-07-15 09:39:59.350806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.383 [2024-07-15 09:39:59.350812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.383 qpair failed and we were unable to recover it. 00:31:12.383 [2024-07-15 09:39:59.351039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.383 [2024-07-15 09:39:59.351046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.383 qpair failed and we were unable to recover it. 
00:31:12.383 [2024-07-15 09:39:59.351397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.383 [2024-07-15 09:39:59.351404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.383 qpair failed and we were unable to recover it. 00:31:12.383 [2024-07-15 09:39:59.351747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.383 [2024-07-15 09:39:59.351757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.383 qpair failed and we were unable to recover it. 00:31:12.383 [2024-07-15 09:39:59.352057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.383 [2024-07-15 09:39:59.352065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.383 qpair failed and we were unable to recover it. 00:31:12.383 [2024-07-15 09:39:59.352408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.383 [2024-07-15 09:39:59.352415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.383 qpair failed and we were unable to recover it. 00:31:12.383 [2024-07-15 09:39:59.352731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.383 [2024-07-15 09:39:59.352738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.383 qpair failed and we were unable to recover it. 00:31:12.383 [2024-07-15 09:39:59.353063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.383 [2024-07-15 09:39:59.353070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.383 qpair failed and we were unable to recover it. 00:31:12.383 [2024-07-15 09:39:59.353415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.383 [2024-07-15 09:39:59.353421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.383 qpair failed and we were unable to recover it. 00:31:12.383 [2024-07-15 09:39:59.353732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.383 [2024-07-15 09:39:59.353744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.353962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.353970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.354300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.354306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 
00:31:12.384 [2024-07-15 09:39:59.354504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.354511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.354792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.354799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.355135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.355142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.355429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.355435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.355569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.355575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.355777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.355783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.356101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.356108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.356407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.356413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.356729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.356736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.357093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.357100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 
00:31:12.384 [2024-07-15 09:39:59.357431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.357438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.357624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.357630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.357890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.357896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.358125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.358131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.358410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.358418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.358609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.358616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.358925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.358931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.359253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.359260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.359539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.359545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.359583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.359590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 
00:31:12.384 [2024-07-15 09:39:59.359825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.359831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.360174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.360181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.360493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.360500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.360908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.360915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.361247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.361253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.361642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.361648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.361950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.361958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.362287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.362293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.362600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.362607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.362937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.362943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 
00:31:12.384 [2024-07-15 09:39:59.363134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.363141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.363507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.363514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.363837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.363843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.364182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.364189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.364496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.364503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.364815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.364822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.365135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.365142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.365459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.384 [2024-07-15 09:39:59.365466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.384 qpair failed and we were unable to recover it. 00:31:12.384 [2024-07-15 09:39:59.365660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.365666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.365975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.365981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 
00:31:12.385 [2024-07-15 09:39:59.366307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.366313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.366675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.366682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.366872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.366879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.367084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.367090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.367313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.367320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.367583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.367591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.367909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.367916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.368235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.368242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.368509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.368515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.368819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.368827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 
00:31:12.385 [2024-07-15 09:39:59.369152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.369159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.369460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.369466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.369794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.369800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.370041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.370047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.370367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.370373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.370686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.370692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.371082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.371089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.371280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.371287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.371607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.371613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.371929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.371936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 
00:31:12.385 [2024-07-15 09:39:59.372113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.372120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.372435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.372441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.372755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.372762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.373060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.373067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.373379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.373385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.373661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.373667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.373964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.373972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.374275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.374282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.374596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.374602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.374799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.374807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 
00:31:12.385 [2024-07-15 09:39:59.375127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.375134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.375434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.375441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.375765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.375772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.375974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.375980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.376276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.376282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.376600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.376606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.376919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.376927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.377229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.377236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.377571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.385 [2024-07-15 09:39:59.377578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.385 qpair failed and we were unable to recover it. 00:31:12.385 [2024-07-15 09:39:59.377872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.377879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 
00:31:12.386 [2024-07-15 09:39:59.378046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.378053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.378339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.378345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.378647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.378654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.378953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.378959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.379259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.379266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.379586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.379592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.379961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.379968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.380241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.380247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.380583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.380589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.380899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.380906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 
00:31:12.386 [2024-07-15 09:39:59.381229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.381236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.381576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.381583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.381930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.381938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.382255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.382261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.382565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.382571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.382713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.382720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.382996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.383003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.383336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.383343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.383610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.383617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.383932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.383940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 
00:31:12.386 [2024-07-15 09:39:59.384151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.384157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.384483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.384490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.384788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.384795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.385127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.385134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.385321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.385328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.385597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.385603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.385928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.385937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.386251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.386258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.386559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.386566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.386867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.386873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 
00:31:12.386 [2024-07-15 09:39:59.387074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.387081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.387412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.387418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.387719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.387726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.387909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.387916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.388235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.388242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.388613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.388620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.388918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.388932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.389256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.389262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.389431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.389438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 00:31:12.386 [2024-07-15 09:39:59.389717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.386 [2024-07-15 09:39:59.389723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.386 qpair failed and we were unable to recover it. 
00:31:12.387 [2024-07-15 09:39:59.389957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.389964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.390043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.390050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.390351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.390360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.390703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.390709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.391012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.391019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.391343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.391350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.391680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.391687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.391980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.391987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.392300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.392307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.392499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.392506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 
00:31:12.387 [2024-07-15 09:39:59.392705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.392713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.393069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.393076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.393374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.393386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.393707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.393714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.394092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.394098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.394417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.394424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.394736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.394743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.395052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.395060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.395389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.395396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.395701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.395709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 
00:31:12.387 [2024-07-15 09:39:59.396020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.396027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.396333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.396340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.396545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.396551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.396881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.396889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.397207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.397214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.397512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.397519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.397745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.397756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.398101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.398107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.398407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.398413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.398730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.398736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 
00:31:12.387 [2024-07-15 09:39:59.399035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.399043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.399349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.399356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.399756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.399762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.400052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.400059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.400382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-07-15 09:39:59.400388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-07-15 09:39:59.400687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.400694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.401021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.401029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.401347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.401354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.401675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.401682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.401999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.402006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 
00:31:12.388 [2024-07-15 09:39:59.402387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.402394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.402695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.402702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.403013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.403020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.403326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.403333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.403648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.403656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.403932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.403938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.404230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.404236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.404532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.404539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.404866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.404872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.405093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.405100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 
00:31:12.388 [2024-07-15 09:39:59.405425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.405431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.405735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.405742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.406049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.406056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.406438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.406444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.406784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.406791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.407127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.407134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.407438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.407445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.407765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.407772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.408093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.408099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.408405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.408411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 
00:31:12.388 [2024-07-15 09:39:59.408725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.408733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.408922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.408929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.409226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.409233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.409559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.409566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.409838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.409845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.410168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.410175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.410562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.410571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.410864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.410871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.411169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.411176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.411496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.411503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 
00:31:12.388 [2024-07-15 09:39:59.411822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.411829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.412042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.412049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.412356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.412363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.412679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.412686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.412987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.412994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.413288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-07-15 09:39:59.413294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-07-15 09:39:59.413596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.413603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.413904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.413911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.414204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.414211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.414515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.414522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 
00:31:12.389 [2024-07-15 09:39:59.414907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.414914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.415233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.415239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.415553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.415560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.415875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.415882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.416173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.416180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.416388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.416395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.416697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.416704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.417014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.417021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.417334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.417340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.417652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.417658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 
00:31:12.389 [2024-07-15 09:39:59.417993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.417999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.418300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.418307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.418669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.418675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.418993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.419000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.419331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.419337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.419671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.419677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.420039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.420045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.420318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.420324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.420532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.420540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.420835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.420842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 
00:31:12.389 [2024-07-15 09:39:59.421155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.421162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.421463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.421469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.421765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.421772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.422075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.422081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.422395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.422401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.422719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.422726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.423106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.423115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.423450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.423457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.423774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.423782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.423979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.423986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 
00:31:12.389 [2024-07-15 09:39:59.424283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.424289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.424627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.424633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.424941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.424948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.425284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.425291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.425469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.425476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.425775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-07-15 09:39:59.425781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-07-15 09:39:59.426113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.426120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.426432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.426438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.426764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.426771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.427103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.427110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 
00:31:12.390 [2024-07-15 09:39:59.427413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.427419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.427615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.427622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.427919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.427926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.428267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.428273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.428590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.428597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.428911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.428918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.429216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.429222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.429562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.429568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.429882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.429889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.430227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.430233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 
00:31:12.390 [2024-07-15 09:39:59.430415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.430421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.430598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.430605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.430932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.430939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.431259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.431265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.431567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.431574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.431872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.431879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.432236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.432244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.432558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.432564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.432845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.432852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.433182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.433189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 
00:31:12.390 [2024-07-15 09:39:59.433570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.433576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.433861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.433868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.434188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.434194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.434496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.434503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.434806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.434813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.435157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.435165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.435504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.435512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.435888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.435895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.436046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.436052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.436380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.436387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 
00:31:12.390 [2024-07-15 09:39:59.436700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.436706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.437011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.437018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.437334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.437340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.437650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.437657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.437940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.437947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.438151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.438157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-07-15 09:39:59.438476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-07-15 09:39:59.438482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.391 [2024-07-15 09:39:59.438776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.391 [2024-07-15 09:39:59.438782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.391 qpair failed and we were unable to recover it. 00:31:12.391 [2024-07-15 09:39:59.439178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.391 [2024-07-15 09:39:59.439185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.391 qpair failed and we were unable to recover it. 00:31:12.391 [2024-07-15 09:39:59.439488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.391 [2024-07-15 09:39:59.439495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.391 qpair failed and we were unable to recover it. 
00:31:12.391 [2024-07-15 09:39:59.439674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-07-15 09:39:59.439682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats for every connection attempt from 09:39:59.439 through 09:39:59.502 ...]
00:31:12.396 [2024-07-15 09:39:59.502349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.396 [2024-07-15 09:39:59.502355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:12.396 qpair failed and we were unable to recover it.
00:31:12.396 [2024-07-15 09:39:59.502612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-07-15 09:39:59.502618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-07-15 09:39:59.502805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-07-15 09:39:59.502813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-07-15 09:39:59.503053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-07-15 09:39:59.503059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-07-15 09:39:59.503275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-07-15 09:39:59.503282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-07-15 09:39:59.503588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-07-15 09:39:59.503595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-07-15 09:39:59.503906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.503912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.504232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.504239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.504593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.504600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.504921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.504929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.505176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.505182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 
00:31:12.397 [2024-07-15 09:39:59.505397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.505404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.505768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.505775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.506179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.506185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.506426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.506433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.506779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.506785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.507090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.507096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.507296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.507303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.507514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.507520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.507709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.507715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.507945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.507952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 
00:31:12.397 [2024-07-15 09:39:59.508280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.508286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.508603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.508609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.508945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.508952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.509268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.509274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.509596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.509603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.509922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.509930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.510253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.510259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.510448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.510454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.510684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.510690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.511005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.511011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 
00:31:12.397 [2024-07-15 09:39:59.511318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.511324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.511581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.511588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.511869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.511875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.512191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.512198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.512451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.512457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.512721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.512728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.513136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.513143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.513344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.513351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.513721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.513728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-07-15 09:39:59.513802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-07-15 09:39:59.513809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 
00:31:12.398 [2024-07-15 09:39:59.514145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.514151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.514369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.514375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.514568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.514574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.514791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.514799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.515081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.515088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.515400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.515407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.515613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.515620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.515910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.515917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.516257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.516263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.516569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.516576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 
00:31:12.398 [2024-07-15 09:39:59.516866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.516872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.517253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.517259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.517467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.517473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.517761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.517768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.518092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.518098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.518436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.518443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.518773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.518780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.519163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.519170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.519544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.519550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.519838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.519844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 
00:31:12.398 [2024-07-15 09:39:59.520167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.520174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.520501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.520507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.520613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.520619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.520854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.520860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.521051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.521058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.521366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.521372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.521688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.521696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.522022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.522028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.522343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.522349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.522531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.522537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 
00:31:12.398 [2024-07-15 09:39:59.522722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.522729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.523079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.523087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.523297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.523304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.523583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.523590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.523909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.523915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.524139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.524145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.524506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.524512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-07-15 09:39:59.524724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-07-15 09:39:59.524731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.524804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.524811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.525114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.525121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 
00:31:12.399 [2024-07-15 09:39:59.525427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.525433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.525724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.525731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.526022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.526029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.526313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.526321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.526633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.526639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.526941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.526948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.527272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.527278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.527509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.527515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.527855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.527862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.528226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.528234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 
00:31:12.399 [2024-07-15 09:39:59.528563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.528570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.528892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.528898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.529113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.529119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.529482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.529489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.529702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.529709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.529916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.529923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.530299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.530306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.530616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.530631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.530951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.530957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.531286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.531292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 
00:31:12.399 [2024-07-15 09:39:59.531601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.531608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.531822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.531829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.532172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.532178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.532550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.532556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.532891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.532898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.533204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.533211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.533424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.533430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.533739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.533745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.534096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.534102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-07-15 09:39:59.534412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.534418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 
00:31:12.399 [2024-07-15 09:39:59.534749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-07-15 09:39:59.534757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.535084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.535090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.535401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.535407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.535765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.535772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.535994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.536001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.536331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.536338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.536638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.536645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.536912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.536919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.537247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.537262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.537582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.537589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 
00:31:12.400 [2024-07-15 09:39:59.537941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.537948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.538244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.538251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.538559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.538566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.538900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.538917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.539240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.539247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.539550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.539557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.539796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.539803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.540022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.540028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.540237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.540244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.540599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.540605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 
00:31:12.400 [2024-07-15 09:39:59.540840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.540847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.541173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.541179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.541524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.541531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.541885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.541892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.542225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.542232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.542547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.542553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.542730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.542737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.543084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.543091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.543415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.543421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.543699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.543705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 
00:31:12.400 [2024-07-15 09:39:59.543884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.543891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.544306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.544312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.544621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.544636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.544900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.544907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.545249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.545255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.545454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.545461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.545620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.545627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.545930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.545937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.546188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.546194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.546486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.546492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 
00:31:12.400 [2024-07-15 09:39:59.546797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.400 [2024-07-15 09:39:59.546804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.400 qpair failed and we were unable to recover it. 00:31:12.400 [2024-07-15 09:39:59.547038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.401 [2024-07-15 09:39:59.547044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.401 qpair failed and we were unable to recover it. 00:31:12.401 [2024-07-15 09:39:59.547359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.401 [2024-07-15 09:39:59.547366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.401 qpair failed and we were unable to recover it. 00:31:12.401 [2024-07-15 09:39:59.547698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.401 [2024-07-15 09:39:59.547704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.401 qpair failed and we were unable to recover it. 00:31:12.401 [2024-07-15 09:39:59.547930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.401 [2024-07-15 09:39:59.547937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.401 qpair failed and we were unable to recover it. 00:31:12.401 [2024-07-15 09:39:59.548270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.401 [2024-07-15 09:39:59.548276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.401 qpair failed and we were unable to recover it. 00:31:12.401 [2024-07-15 09:39:59.548595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.401 [2024-07-15 09:39:59.548601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.401 qpair failed and we were unable to recover it. 00:31:12.401 [2024-07-15 09:39:59.548929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.401 [2024-07-15 09:39:59.548935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.401 qpair failed and we were unable to recover it. 00:31:12.401 [2024-07-15 09:39:59.549272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.401 [2024-07-15 09:39:59.549278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.401 qpair failed and we were unable to recover it. 00:31:12.401 [2024-07-15 09:39:59.549509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.401 [2024-07-15 09:39:59.549516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.401 qpair failed and we were unable to recover it. 
00:31:12.401 [2024-07-15 09:39:59.549737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.401 [2024-07-15 09:39:59.549743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.401 qpair failed and we were unable to recover it. 00:31:12.401 [2024-07-15 09:39:59.550071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.401 [2024-07-15 09:39:59.550077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.401 qpair failed and we were unable to recover it. 00:31:12.401 [2024-07-15 09:39:59.550469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.401 [2024-07-15 09:39:59.550476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.401 qpair failed and we were unable to recover it. 00:31:12.401 [2024-07-15 09:39:59.550658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.401 [2024-07-15 09:39:59.550667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.401 qpair failed and we were unable to recover it. 00:31:12.401 [2024-07-15 09:39:59.551006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.401 [2024-07-15 09:39:59.551013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.401 qpair failed and we were unable to recover it. 00:31:12.401 [2024-07-15 09:39:59.551368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.401 [2024-07-15 09:39:59.551374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.401 qpair failed and we were unable to recover it. 00:31:12.401 [2024-07-15 09:39:59.551437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.401 [2024-07-15 09:39:59.551444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.670 qpair failed and we were unable to recover it. 00:31:12.670 [2024-07-15 09:39:59.551744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.670 [2024-07-15 09:39:59.551754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.670 qpair failed and we were unable to recover it. 00:31:12.670 [2024-07-15 09:39:59.552086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.670 [2024-07-15 09:39:59.552093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.670 qpair failed and we were unable to recover it. 00:31:12.670 [2024-07-15 09:39:59.552331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.670 [2024-07-15 09:39:59.552338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.670 qpair failed and we were unable to recover it. 
00:31:12.670 [2024-07-15 09:39:59.552657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.670 [2024-07-15 09:39:59.552664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.670 qpair failed and we were unable to recover it. 00:31:12.670 [2024-07-15 09:39:59.552874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.670 [2024-07-15 09:39:59.552881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.670 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.553293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.553301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.553602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.553609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.553801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.553809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.554152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.554159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.554478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.554485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.554776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.554783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.555158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.555165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.555374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.555381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 
00:31:12.671 [2024-07-15 09:39:59.555796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.555802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.556108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.556115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.556368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.556375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.556585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.556591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.556929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.556936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.557140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.557146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.557384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.557390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.557705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.557711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.558011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.558018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.558409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.558416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 
00:31:12.671 [2024-07-15 09:39:59.558759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.558766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.559118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.559125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.559356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.559363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.559575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.559581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.559876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.559882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.560227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.560233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.560564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.560570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.560759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.560766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.561088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.561095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.561511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.561518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 
00:31:12.671 [2024-07-15 09:39:59.561809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.561816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.562062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.562068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.562393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.562408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.562476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.562484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.562777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.562783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.563107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.563113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.563465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.563471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.563642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.563648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.563983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.563989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.564327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.564334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 
00:31:12.671 [2024-07-15 09:39:59.564638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.564646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.564959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.564966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.565307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.565314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.565603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.565610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.565937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.565943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.566267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.566273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.566578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.566584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.566937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.566944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.567266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.567272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.567576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.567583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 
00:31:12.671 [2024-07-15 09:39:59.567917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.567923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.568233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.568240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.568546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.568553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.568748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.568757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.569074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.569080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.569401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.569407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.569790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.569797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.570081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.570087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.570328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.570334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.570713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.570719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 
00:31:12.671 [2024-07-15 09:39:59.570827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.570834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.571137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.571144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.571318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.571324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.571537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.571544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.571831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.571839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.572231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.572238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.572540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.572547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.572886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.572892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.573105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.573112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.573448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.573456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 
00:31:12.671 [2024-07-15 09:39:59.573775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.573783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.573982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.573989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.574187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.671 [2024-07-15 09:39:59.574193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.671 qpair failed and we were unable to recover it. 00:31:12.671 [2024-07-15 09:39:59.574517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.574526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.574771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.574778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.574976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.574982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.575323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.575329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.575674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.575681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.576010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.576017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.576197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.576204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 
00:31:12.672 [2024-07-15 09:39:59.576571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.576578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.576906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.576913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.577188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.577194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.577482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.577489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.577710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.577717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.578047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.578055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.578407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.578414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.578740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.578747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.579049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.579056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.579370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.579377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 
00:31:12.672 [2024-07-15 09:39:59.579651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.579658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.579835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.579842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.580151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.580157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.580380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.580387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.580703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.580709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.580898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.580904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.581288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.581295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.581608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.581615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.581931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.581938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.582154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.582161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 
00:31:12.672 [2024-07-15 09:39:59.582481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.582488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.582753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.582760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.583110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.583116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.583460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.583467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.583788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.583795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.583985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.583992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.584309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.584316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.584669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.584677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.585000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.585007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.585334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.585349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 
00:31:12.672 [2024-07-15 09:39:59.585699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.585707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.585990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.585997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.586182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.586188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.586469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.586478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.586665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.586672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.587100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.587108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.587439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.587446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.587764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.587770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.588116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.588123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.588383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.588390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 
00:31:12.672 [2024-07-15 09:39:59.588612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.588618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.589057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.589064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.589437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.589443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.589782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.589789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.590012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.590020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.590322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.590328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.590620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.590626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.590910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.590916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.591139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.591145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 00:31:12.672 [2024-07-15 09:39:59.591434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.672 [2024-07-15 09:39:59.591441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.672 qpair failed and we were unable to recover it. 
00:31:12.672 [2024-07-15 09:39:59.591780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.672 [2024-07-15 09:39:59.591788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:12.672 qpair failed and we were unable to recover it.
00:31:12.675 [the same three-entry sequence (posix.c:1038:posix_sock_create connect() failed with errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 2024-07-15 09:39:59.591983 through 09:39:59.654573]
00:31:12.675 [2024-07-15 09:39:59.654860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.675 [2024-07-15 09:39:59.654867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.675 qpair failed and we were unable to recover it. 00:31:12.675 [2024-07-15 09:39:59.655149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.675 [2024-07-15 09:39:59.655155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.675 qpair failed and we were unable to recover it. 00:31:12.675 [2024-07-15 09:39:59.655311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.675 [2024-07-15 09:39:59.655318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.675 qpair failed and we were unable to recover it. 00:31:12.675 [2024-07-15 09:39:59.655692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.675 [2024-07-15 09:39:59.655700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.675 qpair failed and we were unable to recover it. 00:31:12.675 [2024-07-15 09:39:59.656019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.675 [2024-07-15 09:39:59.656026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.675 qpair failed and we were unable to recover it. 00:31:12.675 [2024-07-15 09:39:59.656320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.675 [2024-07-15 09:39:59.656327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.675 qpair failed and we were unable to recover it. 00:31:12.675 [2024-07-15 09:39:59.656621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.675 [2024-07-15 09:39:59.656628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.675 qpair failed and we were unable to recover it. 00:31:12.675 [2024-07-15 09:39:59.656927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.675 [2024-07-15 09:39:59.656934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.675 qpair failed and we were unable to recover it. 00:31:12.675 [2024-07-15 09:39:59.657242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.675 [2024-07-15 09:39:59.657249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.675 qpair failed and we were unable to recover it. 00:31:12.675 [2024-07-15 09:39:59.657549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.675 [2024-07-15 09:39:59.657555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.675 qpair failed and we were unable to recover it. 
00:31:12.675 [2024-07-15 09:39:59.657858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.657865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.658185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.658191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.658503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.658509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.658819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.658826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.659062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.659068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.659386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.659393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.659774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.659781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.660074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.660081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.660403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.660409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.660787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.660793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 
00:31:12.676 [2024-07-15 09:39:59.661133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.661139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.661478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.661484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.661684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.661690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.661994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.662002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.662346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.662354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.662589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.662597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.662917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.662925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.663250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.663257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.663444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.663451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.663717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.663723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 
00:31:12.676 [2024-07-15 09:39:59.664006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.664013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.664332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.664338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.664636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.664643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.664945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.664951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.665240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.665247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.665577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.665583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.665885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.665892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.666094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.666101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.666419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.666426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.666769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.666775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 
00:31:12.676 [2024-07-15 09:39:59.666989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.666997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.667335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.667341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.667648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.667655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.667988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.667995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.668296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.668302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.668601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.668608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.668919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.668926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.669145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.669152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.669303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.669309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.669598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.669604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 
00:31:12.676 [2024-07-15 09:39:59.669922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.669928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.670265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.670272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.670457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.670463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.670807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.670813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.671156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.671162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.671465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.671471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.671670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.671677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.672020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.672026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.672260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.672267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.672474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.672481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 
00:31:12.676 [2024-07-15 09:39:59.672807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.672814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.673135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.673141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.673437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.673444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.673790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.673797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.674094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.674101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.674357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.674363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.674661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.674667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.675001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.675008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.675324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.675331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.675612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.675618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 
00:31:12.676 [2024-07-15 09:39:59.675818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.675824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.676112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.676118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.676429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.676435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.676742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.676759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.676986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.676992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.677301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.677308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.677449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.677456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.677770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.677777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.678102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.678110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.678452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.678459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 
00:31:12.676 [2024-07-15 09:39:59.678760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.678769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.679106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.679113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.676 [2024-07-15 09:39:59.679427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.676 [2024-07-15 09:39:59.679433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.676 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.679605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.679612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.679891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.679898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.680225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.680232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.680426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.680432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.680707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.680713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.681103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.681110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.681434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.681440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 
00:31:12.677 [2024-07-15 09:39:59.681829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.681835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.682128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.682135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.682324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.682330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.682549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.682555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.682880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.682887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.683210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.683217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.683403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.683410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.683738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.683746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.684087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.684094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.684465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.684471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 
00:31:12.677 [2024-07-15 09:39:59.684787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.684794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.685083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.685090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.685411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.685418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.685797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.685803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.686144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.686150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.686471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.686477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.686777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.686783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.687177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.687183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.687482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.687489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.687674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.687680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 
00:31:12.677 [2024-07-15 09:39:59.687870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.687878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.688242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.688248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.688552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.688567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.688755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.688761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.689066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.689072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.689364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.689370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.689661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.689668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.689985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.689992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.690333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.690339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.690633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.690640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 
00:31:12.677 [2024-07-15 09:39:59.690931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.690939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.691259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.691266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.691581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.691587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.691986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.691993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.692186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.692192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.692516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.692522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.692846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.692853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.693157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.693164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.693383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.693389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.693551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.693558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 
00:31:12.677 [2024-07-15 09:39:59.693850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.693856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.694167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.694174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.694502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.694509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.694651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.694658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.694854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.694861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.695197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.695203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.695483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.695489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.695814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.695820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.696153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.696160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.696473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.696480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 
00:31:12.677 [2024-07-15 09:39:59.696670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.696676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.697086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.697092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.697374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.697381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.697686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.697693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.698002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.698010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.698333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.698339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.698659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.698665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.698956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.698963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.699290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.699297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.699651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.699657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 
00:31:12.677 [2024-07-15 09:39:59.699923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.699930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.700255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.700261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.700563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.700569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.700766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.700773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.677 qpair failed and we were unable to recover it. 00:31:12.677 [2024-07-15 09:39:59.701138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.677 [2024-07-15 09:39:59.701145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.701455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.701461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.701756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.701763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.702067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.702074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.702377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.702383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.702682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.702688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 
00:31:12.678 [2024-07-15 09:39:59.702892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.702900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.703230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.703237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.703553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.703560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.703871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.703879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.704201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.704208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.704525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.704532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.704836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.704842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.705059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.705065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.705262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.705269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.705606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.705612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 
00:31:12.678 [2024-07-15 09:39:59.705917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.705924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.706245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.706251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.706521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.706527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.706844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.706851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.707143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.707149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.707445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.707452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.707754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.707761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.708109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.708116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.708432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.708438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.708635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.708642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 
00:31:12.678 [2024-07-15 09:39:59.708980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.708987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.709305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.709312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.709612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.709619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.709849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.709855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.710164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.710170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.710496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.710502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.710821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.710827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.711152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.711158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.711563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.711570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.711892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.711898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 
00:31:12.678 [2024-07-15 09:39:59.712072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.712078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.712527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.712533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.712847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.712855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.713166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.713173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.713491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.713497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.713803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.713809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.714126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.714133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.714433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.714439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.714768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.714775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.715101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.715108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 
00:31:12.678 [2024-07-15 09:39:59.715405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.715415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.715736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.715742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.716115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.716122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.716437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.716443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.716746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.716757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.717127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.717134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.717480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.717492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.717787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.717793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.718009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.718016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.718340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.718346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 
00:31:12.678 [2024-07-15 09:39:59.718667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.718673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.718869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.718876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.719098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.678 [2024-07-15 09:39:59.719105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.678 qpair failed and we were unable to recover it. 00:31:12.678 [2024-07-15 09:39:59.719430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.719437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.719771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.719778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.720099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.720106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.720421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.720436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.720748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.720761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.721048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.721055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.721265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.721272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 
00:31:12.679 [2024-07-15 09:39:59.721590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.721597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.721906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.721913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.722248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.722254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.722557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.722563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.722872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.722879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.723201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.723207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.723512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.723519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.723853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.723859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.724071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.724078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.724418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.724424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 
00:31:12.679 [2024-07-15 09:39:59.724743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.724749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.725057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.725064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.725444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.725450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.725621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.725628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.725931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.725938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.726329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.726336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.726650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.726657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.726976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.726983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.727248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.727254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.727415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.727422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 
00:31:12.679 [2024-07-15 09:39:59.727628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.727637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.727978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.727984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.728294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.728300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.728498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.728505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.728830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.728837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.729156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.729162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.729454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.729460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.729777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.729784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.729964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.729970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.730302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.730308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 
00:31:12.679 [2024-07-15 09:39:59.730629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.730635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.730858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.730865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.731170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.731176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.731480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.731486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.731821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.731827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.732153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.732160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.732501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.732507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.732803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.732810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.733038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.733045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.733373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.733380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 
00:31:12.679 [2024-07-15 09:39:59.733681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.733687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.733991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.733997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.734358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.734364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.734666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.734673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.734952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.734959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.735278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.735285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.735596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.735603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.735950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.735956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.736252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.736258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.736576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.736582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 
00:31:12.679 [2024-07-15 09:39:59.736900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.736907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.737250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.737256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.737576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.737583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.737925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.737932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.738249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.738256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.738567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.738573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.738875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.738881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.739086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.739093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.739370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.739376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.739696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.739702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 
00:31:12.679 [2024-07-15 09:39:59.740009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.740018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.679 qpair failed and we were unable to recover it. 00:31:12.679 [2024-07-15 09:39:59.740332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.679 [2024-07-15 09:39:59.740338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.740592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.740598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.740813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.740820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.741134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.741140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.741446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.741453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.741785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.741792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.742105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.742112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.742341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.742347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.742562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.742569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 
00:31:12.680 [2024-07-15 09:39:59.742930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.742936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.743261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.743267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.743605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.743612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.743918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.743924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.744239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.744246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.744587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.744594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.744894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.744901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.745229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.745235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.745387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.745394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.745673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.745686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 
00:31:12.680 [2024-07-15 09:39:59.746043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.746049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.746427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.746434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.746632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.746638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.746940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.746947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.747257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.747264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.747643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.747649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.747994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.748001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.748192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.748199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.748503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.748510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.748816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.748823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 
00:31:12.680 [2024-07-15 09:39:59.749009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.749015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.749233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.749240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.749444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.749450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.749710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.749716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.750015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.750023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.750365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.750372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.750673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.750679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.750961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.750967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.751189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.751195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.751545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.751552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 
00:31:12.680 [2024-07-15 09:39:59.751894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.751901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.752113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.752119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.752465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.752471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.752698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.752704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.753001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.753008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.753334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.753340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.753639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.753645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.753837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.753844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.754193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.754199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.754514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.754521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 
00:31:12.680 [2024-07-15 09:39:59.754834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.754841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.755199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.755206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.755539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.755547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.755887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.755894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.756238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.756244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.756566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.756572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.756870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.756877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.757185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.757192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.757539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.757545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.757746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.757758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 
00:31:12.680 [2024-07-15 09:39:59.758074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.758080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.758406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.758412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.758725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.758731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.759031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.759038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.759343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.759350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.759666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.759673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.759990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.759997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.680 [2024-07-15 09:39:59.760310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.680 [2024-07-15 09:39:59.760316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.680 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.760504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.760510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.760716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.760723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 
00:31:12.681 [2024-07-15 09:39:59.761044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.761051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.761240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.761247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.761431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.761438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.761771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.761778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.762082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.762088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.762409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.762415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.762734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.762740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.762968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.762975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.763290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.763296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.763567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.763573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 
00:31:12.681 [2024-07-15 09:39:59.763744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.763756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.764065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.764072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.764373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.764379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.764682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.764688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.764974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.764981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.765261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.765267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.765602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.765609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.765791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.765798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.766006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.766013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.766312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.766319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 
00:31:12.681 [2024-07-15 09:39:59.766535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.766542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.766830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.766836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.767118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.767125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.767441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.767447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.767828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.767834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.768108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.768114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.768432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.768438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.768754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.768761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.769060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.769066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.769368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.769375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 
00:31:12.681 [2024-07-15 09:39:59.769659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.769666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.770045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.770051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.770343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.770350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.770673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.770680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.770902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.770909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.771258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.771265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.771570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.771578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.771874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.771880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.772196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.772204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.772495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.772503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 
00:31:12.681 [2024-07-15 09:39:59.772803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.772809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.773125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.773132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.773324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.773332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.773662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.773669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.773953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.773960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.774159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.774166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.774462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.774469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.774789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.774797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.775115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.775122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.775276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.775284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 
00:31:12.681 [2024-07-15 09:39:59.775669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.775679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.776046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.776054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.776371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.776378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.776688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.776695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.777009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.777016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.777337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.777343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.777629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.777635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.777955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.777962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.778266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.778273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.778574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.778580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 
00:31:12.681 [2024-07-15 09:39:59.778888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.778895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.779220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.779226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.779534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.779541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.779879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.779886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.780206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.780213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.780524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.780531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.780717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.780723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.781022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.781029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.781373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-07-15 09:39:59.781380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-07-15 09:39:59.782209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.782226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 
00:31:12.682 [2024-07-15 09:39:59.782523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.782532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.782872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.782880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.783200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.783207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.783509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.783516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.783824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.783830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.784145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.784152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.784447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.784454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.784803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.784809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.785117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.785123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.785431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.785438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 
00:31:12.682 [2024-07-15 09:39:59.785744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.785762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.786063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.786070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.786373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.786379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.786683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.786690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.787016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.787024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.787361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.787368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.787676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.787683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.787891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.787898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.788096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.788103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.788405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.788411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 
00:31:12.682 [2024-07-15 09:39:59.788686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.788694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.788914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.788922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.789239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.789245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.789579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.789586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.789912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.789919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.790234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.790241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.790559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.790566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.790903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.790910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.791243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.791249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.791552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.791559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 
00:31:12.682 [2024-07-15 09:39:59.791841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.791848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.792141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.792155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.792494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.792500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.792805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.792811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.793136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.793143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.793445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.793452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.793764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.793771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.794070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.794076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.794392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.794398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.794703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.794710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 
00:31:12.682 [2024-07-15 09:39:59.794986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.794992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.795294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.795301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.795495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.795502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.795730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.795737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.796087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.796094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.796414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.796421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.796721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.796728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.797142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.797149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.797460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.797467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.797788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.797795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 
00:31:12.682 [2024-07-15 09:39:59.798038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.798045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.798359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.798366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.798662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.798669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.798998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.799005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.799385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.799391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.799572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.799579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.799803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.799810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.800088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.800094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.800421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.800427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.800608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.800615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 
00:31:12.682 [2024-07-15 09:39:59.800812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.800820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.801110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.801118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.801447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.801453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.801762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.801769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.801971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.801977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.802256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.802262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.802575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.802582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-07-15 09:39:59.802775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-07-15 09:39:59.802782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.803103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.803109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.803405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.803412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 
00:31:12.683 [2024-07-15 09:39:59.803710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.803716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.803948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.803954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.804271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.804277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.804577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.804584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.804971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.804979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.805346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.805352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.805653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.805660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.805998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.806005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.806299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.806306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.806463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.806471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 
00:31:12.683 [2024-07-15 09:39:59.806810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.806817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.807148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.807155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.807461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.807467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.807757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.807765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.808001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.808007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.808206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.808213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.808552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.808558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.808870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.808877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.809197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.809204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.809584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.809591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 
00:31:12.683 [2024-07-15 09:39:59.809927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.809934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.810265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.810271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.810569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.810577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.810889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.810897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.811281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.811288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.811595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.811601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.811944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.811951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.812233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.812239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.812574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.812580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.812879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.812892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 
00:31:12.683 [2024-07-15 09:39:59.813213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.813221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.813516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.813523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.813886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.813893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.814115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.814121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.814459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.814466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.814658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.814665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.815019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.815026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.815246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.815253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.815545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.815551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.815872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.815878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 
00:31:12.683 [2024-07-15 09:39:59.816198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.816204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.816501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.816508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.816880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.816887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.817196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.817203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.817519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.817525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.817912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.817918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.818228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.818234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.818555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.818562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.818888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.818895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.819112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.819119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 
00:31:12.683 [2024-07-15 09:39:59.819301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.819308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.819632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.819639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.819861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.819868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.820272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.820278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.820577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.820584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.820901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.820907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.821215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.821222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.821544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.821555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.821856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.821863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-07-15 09:39:59.822182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.822188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 
00:31:12.683 [2024-07-15 09:39:59.822569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-07-15 09:39:59.822576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.822909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.822916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.823252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.823258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.823552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.823559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.823853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.823860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.824168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.824175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.824516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.824522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.824824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.824832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.825150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.825157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.825536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.825542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 
00:31:12.684 [2024-07-15 09:39:59.825780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.825786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.826110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.826117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.826448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.826454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.826763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.826770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.826985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.826991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.827267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.827274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.827461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.827468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.827665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.827672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.827973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.827980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.828308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.828314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 
00:31:12.684 [2024-07-15 09:39:59.828507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.828514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.828851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.828858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.829084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.829091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.829369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.829376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.829693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.829700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.830002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.830009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.830353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.830359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.830650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.830657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.830846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.830853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.831191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.831198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 
00:31:12.684 [2024-07-15 09:39:59.831361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.831368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.831701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.831707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.832020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.832028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.832344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.832351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.832658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.832665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.832948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.832955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.833270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.833277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.833592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.833600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.833886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.833893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.834214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.834220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 
00:31:12.684 [2024-07-15 09:39:59.834515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.834523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.834823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.834829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.835177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.835183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.835486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.835492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.835808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.835815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.836028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.836034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.836380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.836387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.836585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.836591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.836934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.836940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.837259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.837265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 
00:31:12.684 [2024-07-15 09:39:59.837539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.837546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.837891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.837898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.838198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.838205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.838506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.838512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.838668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.838675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.838926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.838933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.839333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.839340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.839504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.839511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.839841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.839847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.840179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.840186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 
00:31:12.684 [2024-07-15 09:39:59.840518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.840524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.840827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.840833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.841036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.841043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.841245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.841252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.841622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.841628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.841930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.841937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.842256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.842263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.842565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.842571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-07-15 09:39:59.842891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-07-15 09:39:59.842898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.685 qpair failed and we were unable to recover it. 00:31:12.685 [2024-07-15 09:39:59.843253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.685 [2024-07-15 09:39:59.843260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.685 qpair failed and we were unable to recover it. 
00:31:12.685 [2024-07-15 09:39:59.843597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.685 [2024-07-15 09:39:59.843604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.685 qpair failed and we were unable to recover it. 00:31:12.685 [2024-07-15 09:39:59.843920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.685 [2024-07-15 09:39:59.843927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.685 qpair failed and we were unable to recover it. 00:31:12.685 [2024-07-15 09:39:59.844243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.685 [2024-07-15 09:39:59.844250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.685 qpair failed and we were unable to recover it. 00:31:12.685 [2024-07-15 09:39:59.844575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.685 [2024-07-15 09:39:59.844581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.685 qpair failed and we were unable to recover it. 00:31:12.685 [2024-07-15 09:39:59.844778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.685 [2024-07-15 09:39:59.844784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.685 qpair failed and we were unable to recover it. 00:31:12.685 [2024-07-15 09:39:59.845133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.685 [2024-07-15 09:39:59.845140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.685 qpair failed and we were unable to recover it. 00:31:12.685 [2024-07-15 09:39:59.845437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.685 [2024-07-15 09:39:59.845444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.685 qpair failed and we were unable to recover it. 00:31:12.685 [2024-07-15 09:39:59.845760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.685 [2024-07-15 09:39:59.845768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.685 qpair failed and we were unable to recover it. 00:31:12.685 [2024-07-15 09:39:59.845996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.685 [2024-07-15 09:39:59.846003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.685 qpair failed and we were unable to recover it. 00:31:12.685 [2024-07-15 09:39:59.846227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.685 [2024-07-15 09:39:59.846234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.685 qpair failed and we were unable to recover it. 
00:31:12.685 [2024-07-15 09:39:59.846549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.685 [2024-07-15 09:39:59.846556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.685 qpair failed and we were unable to recover it. 00:31:12.685 [2024-07-15 09:39:59.846750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.685 [2024-07-15 09:39:59.846760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.685 qpair failed and we were unable to recover it. 00:31:12.685 [2024-07-15 09:39:59.846998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.685 [2024-07-15 09:39:59.847004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.685 qpair failed and we were unable to recover it. 00:31:12.685 [2024-07-15 09:39:59.847229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.685 [2024-07-15 09:39:59.847235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.685 qpair failed and we were unable to recover it. 00:31:12.685 [2024-07-15 09:39:59.847511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.685 [2024-07-15 09:39:59.847517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.685 qpair failed and we were unable to recover it. 00:31:12.685 [2024-07-15 09:39:59.847836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.685 [2024-07-15 09:39:59.847842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.685 qpair failed and we were unable to recover it. 00:31:12.685 [2024-07-15 09:39:59.848165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.685 [2024-07-15 09:39:59.848172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.685 qpair failed and we were unable to recover it. 00:31:12.685 [2024-07-15 09:39:59.848467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.685 [2024-07-15 09:39:59.848473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.685 qpair failed and we were unable to recover it. 00:31:12.685 [2024-07-15 09:39:59.848774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.685 [2024-07-15 09:39:59.848781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.685 qpair failed and we were unable to recover it. 00:31:12.685 [2024-07-15 09:39:59.849186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.685 [2024-07-15 09:39:59.849192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.685 qpair failed and we were unable to recover it. 
00:31:12.685 [2024-07-15 09:39:59.849497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.685 [2024-07-15 09:39:59.849504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.685 qpair failed and we were unable to recover it.
[... the same error pair repeats for every subsequent reconnect attempt from 09:39:59.849 through 09:39:59.912 (console timestamps 00:31:12.685–00:31:12.963): posix.c:1038:posix_sock_create reports "connect() failed, errno = 111" (ECONNREFUSED) and nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reports "sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420", each attempt ending with "qpair failed and we were unable to recover it." ...]
00:31:12.965 [2024-07-15 09:39:59.912844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.965 [2024-07-15 09:39:59.912850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.965 qpair failed and we were unable to recover it. 00:31:12.965 [2024-07-15 09:39:59.913027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.965 [2024-07-15 09:39:59.913034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.965 qpair failed and we were unable to recover it. 00:31:12.965 [2024-07-15 09:39:59.913218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.965 [2024-07-15 09:39:59.913225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.965 qpair failed and we were unable to recover it. 00:31:12.965 [2024-07-15 09:39:59.913561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.965 [2024-07-15 09:39:59.913567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.965 qpair failed and we were unable to recover it. 00:31:12.965 [2024-07-15 09:39:59.913940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.965 [2024-07-15 09:39:59.913947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.965 qpair failed and we were unable to recover it. 00:31:12.965 [2024-07-15 09:39:59.914268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.965 [2024-07-15 09:39:59.914274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.965 qpair failed and we were unable to recover it. 00:31:12.965 [2024-07-15 09:39:59.914585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.965 [2024-07-15 09:39:59.914599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.965 qpair failed and we were unable to recover it. 00:31:12.965 [2024-07-15 09:39:59.914916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.965 [2024-07-15 09:39:59.914922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.965 qpair failed and we were unable to recover it. 00:31:12.965 [2024-07-15 09:39:59.915236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.965 [2024-07-15 09:39:59.915243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.965 qpair failed and we were unable to recover it. 00:31:12.965 [2024-07-15 09:39:59.915584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.965 [2024-07-15 09:39:59.915590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.965 qpair failed and we were unable to recover it. 
00:31:12.966 [2024-07-15 09:39:59.915903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.915910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.916284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.916290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.916501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.916507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.916711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.916718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.917068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.917075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.917366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.917373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.917711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.917717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.918032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.918040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.918355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.918361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.918652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.918659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 
00:31:12.966 [2024-07-15 09:39:59.918973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.918980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.919360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.919366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.919651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.919657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.919975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.919981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.920286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.920293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.920610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.920617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.920921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.920928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.921228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.921234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.921555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.921562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.921888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.921894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 
00:31:12.966 [2024-07-15 09:39:59.922252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.922259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.922599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.922606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.922873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.922879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.923201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.923208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.923531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.923538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.923838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.923846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.924015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.924021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.924348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.924355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.924681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.924688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.925004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.925011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 
00:31:12.966 [2024-07-15 09:39:59.925319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.925325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.925630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.925638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.925929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.925936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.926254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.926261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.926565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.926572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.926864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.926870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.927181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.927187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.927483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.927495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.927836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.927843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 00:31:12.966 [2024-07-15 09:39:59.928035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.966 [2024-07-15 09:39:59.928041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.966 qpair failed and we were unable to recover it. 
00:31:12.966 [2024-07-15 09:39:59.928336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.928343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.928661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.928667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.928964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.928971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.929254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.929260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.929407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.929414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.929634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.929641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.929937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.929944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.930229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.930238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.930527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.930534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.930747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.930756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 
00:31:12.967 [2024-07-15 09:39:59.931052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.931059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.931384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.931390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.931691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.931697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.932075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.932081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.932378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.932385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.932686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.932693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.933020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.933026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.933218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.933225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.933548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.933556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.933867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.933873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 
00:31:12.967 [2024-07-15 09:39:59.934266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.934273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.934612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.934618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.934903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.934909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.935226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.935232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.935546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.935553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.935835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.935841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.936169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.936175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.936554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.936560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.936911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.936918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.937250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.937256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 
00:31:12.967 [2024-07-15 09:39:59.937548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.937554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.937877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.937883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.938194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.938201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.938521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.938527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.938910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.938917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.939324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.939330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.939660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.939667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.939893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.939900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.940084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.940091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.940396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.940403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 
00:31:12.967 [2024-07-15 09:39:59.940703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.967 [2024-07-15 09:39:59.940710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.967 qpair failed and we were unable to recover it. 00:31:12.967 [2024-07-15 09:39:59.941008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.941015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.941307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.941313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.941635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.941642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.941935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.941942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.942259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.942266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.942614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.942621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.942924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.942934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.943142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.943150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.943464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.943471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 
00:31:12.968 [2024-07-15 09:39:59.943767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.943774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.944096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.944103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.944404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.944412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.944724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.944730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.945035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.945042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.945357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.945363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.945709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.945715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.946029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.946036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.946314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.946320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.946497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.946504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 
00:31:12.968 [2024-07-15 09:39:59.946776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.946782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.947139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.947145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.947447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.947454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.947767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.947773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.948146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.948152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.948476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.948483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.948778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.948785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.949142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.949148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.949461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.949467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.949783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.949790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 
00:31:12.968 [2024-07-15 09:39:59.949973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.949980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.950302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.950309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.950626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.950633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.950827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.950834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.951151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.951158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.951458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.951464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.951788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.951795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.952125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.952131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.952436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.952443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.952787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.952793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 
00:31:12.968 [2024-07-15 09:39:59.953099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.953106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.953427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.968 [2024-07-15 09:39:59.953433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.968 qpair failed and we were unable to recover it. 00:31:12.968 [2024-07-15 09:39:59.953561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.953567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.953982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.953989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.954280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.954287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.954485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.954491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.954789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.954796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.955010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.955019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.955335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.955341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.955509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.955516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 
00:31:12.969 [2024-07-15 09:39:59.955800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.955807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.956086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.956092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.956399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.956405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.956721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.956727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.956930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.956937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.957264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.957270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.957572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.957578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.957918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.957925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.958231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.958238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.958550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.958557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 
00:31:12.969 [2024-07-15 09:39:59.958883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.958889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.959286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.959292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.959628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.959634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.959819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.959826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.960250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.960256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.960607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.960614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.960932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.960939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.961262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.961269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.961584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.961590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.961863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.961870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 
00:31:12.969 [2024-07-15 09:39:59.962192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.962199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.962491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.962497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.962588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.962594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.962863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.962870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.963201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.963207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.963372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.963379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.963709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.963715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.964022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.964028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.964331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.964337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.964690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.964696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 
00:31:12.969 [2024-07-15 09:39:59.965077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.965084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.969 qpair failed and we were unable to recover it. 00:31:12.969 [2024-07-15 09:39:59.965426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.969 [2024-07-15 09:39:59.965432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.965576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.965582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.965886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.965892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.966169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.966176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.966483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.966490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.966808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.966815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.967130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.967138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.967333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.967340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.967547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.967554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 
00:31:12.970 [2024-07-15 09:39:59.967890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.967897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.968257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.968263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.968582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.968588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.968884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.968890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.969222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.969229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.969534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.969541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.969870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.969876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.970255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.970262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.970570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.970577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.970876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.970883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 
00:31:12.970 [2024-07-15 09:39:59.971187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.971193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.971501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.971508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.971804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.971811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.972134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.972140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.972443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.972449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.972764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.972770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.973067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.973074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.973390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.973396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.973700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.973707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.974032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.974038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 
00:31:12.970 [2024-07-15 09:39:59.974242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.974248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.974555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.974562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.974765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.974773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.975080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.975087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.975390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.975398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.975698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.975704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.975999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.976006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.976329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.976335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.970 [2024-07-15 09:39:59.976484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.970 [2024-07-15 09:39:59.976492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.970 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.976875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.976882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 
00:31:12.971 [2024-07-15 09:39:59.977188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.977195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.977497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.977503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.977865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.977872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.978159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.978165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.978389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.978396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.978713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.978719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.978862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.978870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.979149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.979156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.979367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.979374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.979771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.979777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 
00:31:12.971 [2024-07-15 09:39:59.980084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.980091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.980390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.980398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.980720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.980728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.981032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.981040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.981357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.981366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.981686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.981694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.981876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.981884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.982175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.982182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.982526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.982535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.982826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.982834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 
00:31:12.971 [2024-07-15 09:39:59.983055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.983063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.983378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.983386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.983726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.983733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.984030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.984039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.984359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.984367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.984665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.984673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.984955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.984962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.985260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.985268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.985619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.985626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.985920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.985928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 
00:31:12.971 [2024-07-15 09:39:59.986218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.986226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.986571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.986579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.986896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.986904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.987141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.987148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.987425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.987434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.987760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.987769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.988104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.988113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.988441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.988450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.988799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.988807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.971 qpair failed and we were unable to recover it. 00:31:12.971 [2024-07-15 09:39:59.989133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.971 [2024-07-15 09:39:59.989141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 
00:31:12.972 [2024-07-15 09:39:59.989452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.989460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.989764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.989773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.990109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.990117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.990418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.990425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.990727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.990736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.991055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.991063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.991382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.991391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.991734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.991742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.992052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.992061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.992379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.992387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 
00:31:12.972 [2024-07-15 09:39:59.992725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.992734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.993039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.993048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.993335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.993343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.993669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.993676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.993863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.993871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.994200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.994208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.994506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.994514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.994828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.994836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.995155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.995162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.995469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.995478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 
00:31:12.972 [2024-07-15 09:39:59.995793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.995800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.996146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.996154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.996454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.996461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.996800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.996808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.997136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.997144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.997444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.997452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.997768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.997776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.998011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.998019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.998350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.998357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.998658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.998667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 
00:31:12.972 [2024-07-15 09:39:59.998983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.998991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.999333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.999341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.999527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.999535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:39:59.999766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:39:59.999774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:40:00.000083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:40:00.000092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:40:00.000393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:40:00.000402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:40:00.000713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:40:00.000721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:40:00.001046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:40:00.001055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:40:00.001389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:40:00.001397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 00:31:12.972 [2024-07-15 09:40:00.001788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.972 [2024-07-15 09:40:00.001796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.972 qpair failed and we were unable to recover it. 
00:31:12.972 [2024-07-15 09:40:00.002041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.002048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.002335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.002342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.002689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.002696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.003085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.003093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.003738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.003750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.003856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.003864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.004182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.004191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.004568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.004577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.004920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.004928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.005125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.005133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 
00:31:12.973 [2024-07-15 09:40:00.005498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.005506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.005615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.005624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.005902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.005910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.006249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.006257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.006578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.006587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.006908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.006916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.007228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.007237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.007611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.007619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.007854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.007863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.008077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.008086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 
00:31:12.973 [2024-07-15 09:40:00.008378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.008387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.008611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.008621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.008916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.008925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.009257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.009265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.009473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.009482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.009777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.009785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.010031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.010040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.010347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.010356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.010675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.010683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.010997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.011006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 
00:31:12.973 [2024-07-15 09:40:00.011231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.011240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.011410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.011418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.011627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.011636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.011945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.011953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.012282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.012292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.012475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.012484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.012818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.012827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.013152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.013160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.013369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.013377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.013759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.013767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 
00:31:12.973 [2024-07-15 09:40:00.014068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.973 [2024-07-15 09:40:00.014077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.973 qpair failed and we were unable to recover it. 00:31:12.973 [2024-07-15 09:40:00.014375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.014384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.014596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.014605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.014933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.014941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.015263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.015271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.015594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.015603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.015797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.015805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.015977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.015985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.016303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.016312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.016614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.016622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 
00:31:12.974 [2024-07-15 09:40:00.016917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.016925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.017262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.017271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.017576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.017584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.017788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.017796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.018115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.018123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.018425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.018433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.018754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.018763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.018952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.018961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.019214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.019223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.019545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.019553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 
00:31:12.974 [2024-07-15 09:40:00.019880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.019888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.020197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.020205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.020486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.020493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.020824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.020833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.021086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.021094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.021446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.021453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.021681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.021689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.021964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.021973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.022291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.022299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.022535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.022542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 
00:31:12.974 [2024-07-15 09:40:00.022735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.022744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.023017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.023026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.023160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.023167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.023239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.023246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.023461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.023470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.023571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.023579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.974 qpair failed and we were unable to recover it. 00:31:12.974 [2024-07-15 09:40:00.023787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.974 [2024-07-15 09:40:00.023796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.023864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.023873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.024158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.024166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.024307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.024315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 
00:31:12.975 [2024-07-15 09:40:00.024676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.024685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.025086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.025094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.025411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.025420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.025609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.025618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.025985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.025993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.026327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.026335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.026741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.026749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.027050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.027057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.027279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.027286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.027628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.027636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 
00:31:12.975 [2024-07-15 09:40:00.027971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.027979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.028178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.028185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.028517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.028525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.028843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.028852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.029215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.029223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.029557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.029566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.029939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.029948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.030152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.030159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.030461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.030469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.030786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.030794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 
00:31:12.975 [2024-07-15 09:40:00.031029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.031036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.031264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.031272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.031580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.031589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.031856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.031864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.032196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.032205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.032532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.032540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.032844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.032852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.033173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.033181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.033521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.033529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.033887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.033896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 
00:31:12.975 [2024-07-15 09:40:00.034242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.034251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.034572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.034580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.034907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.034915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.035254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.035263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.035593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.035603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.035926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.035935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.036110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-07-15 09:40:00.036119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-07-15 09:40:00.036416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.036424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.036766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.036775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.037102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.037110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 
00:31:12.976 [2024-07-15 09:40:00.037464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.037472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.037813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.037821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.038185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.038193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.038503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.038511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.038832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.038842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.039124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.039132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.039327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.039335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.039663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.039670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.039869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.039877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.040256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.040264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 
00:31:12.976 [2024-07-15 09:40:00.040595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.040603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.040932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.040940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.041314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.041322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.041551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.041559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.041760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.041768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.042066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.042074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.042242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.042250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.042600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.042608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.042953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.042962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.043261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.043269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 
00:31:12.976 [2024-07-15 09:40:00.043575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.043584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.043905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.043913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.044240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.044248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.044578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.044585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.044911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.044919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.045104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.045112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.045470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.045478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.045778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.045785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.046018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.046025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.046191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.046199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 
00:31:12.976 [2024-07-15 09:40:00.046384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.046391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.046553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.046562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.046926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.046934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.047248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.047257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.047622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.047631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.047975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.047982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.048290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.048297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.048594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-07-15 09:40:00.048601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-07-15 09:40:00.048898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-07-15 09:40:00.048906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-07-15 09:40:00.049098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-07-15 09:40:00.049106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 
00:31:12.977 [2024-07-15 09:40:00.049316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-07-15 09:40:00.049323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-07-15 09:40:00.049644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-07-15 09:40:00.049652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-07-15 09:40:00.049933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-07-15 09:40:00.049941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-07-15 09:40:00.050273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-07-15 09:40:00.050281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-07-15 09:40:00.050626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-07-15 09:40:00.050635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-07-15 09:40:00.050910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-07-15 09:40:00.050918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-07-15 09:40:00.051260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-07-15 09:40:00.051268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-07-15 09:40:00.051587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-07-15 09:40:00.051595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-07-15 09:40:00.051924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-07-15 09:40:00.051932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-07-15 09:40:00.052257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-07-15 09:40:00.052265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 
00:31:12.977 [2024-07-15 09:40:00.052592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-07-15 09:40:00.052600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-07-15 09:40:00.052797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-07-15 09:40:00.052805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-07-15 09:40:00.053131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-07-15 09:40:00.053138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-07-15 09:40:00.053447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-07-15 09:40:00.053455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-07-15 09:40:00.053753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-07-15 09:40:00.053761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-07-15 09:40:00.054110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-07-15 09:40:00.054117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-07-15 09:40:00.054444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-07-15 09:40:00.054452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-07-15 09:40:00.054807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-07-15 09:40:00.054815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-07-15 09:40:00.055159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-07-15 09:40:00.055167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-07-15 09:40:00.055329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-07-15 09:40:00.055337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 
00:31:12.977 [2024-07-15 09:40:00.055529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-07-15 09:40:00.055537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 
00:31:12.977 to 00:31:12.982 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every connection attempt logged between 09:40:00.055822 and 09:40:00.118191, with only the timestamps changing ...] 
00:31:12.982 [2024-07-15 09:40:00.118558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.982 [2024-07-15 09:40:00.118566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.982 qpair failed and we were unable to recover it. 
00:31:12.982 [2024-07-15 09:40:00.118870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.982 [2024-07-15 09:40:00.118878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.982 qpair failed and we were unable to recover it. 00:31:12.982 [2024-07-15 09:40:00.119186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.982 [2024-07-15 09:40:00.119195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.982 qpair failed and we were unable to recover it. 00:31:12.982 [2024-07-15 09:40:00.119521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.982 [2024-07-15 09:40:00.119530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.982 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.119685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.119694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.120023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.120034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.120355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.120364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.120683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.120693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.121081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.121090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.121380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.121389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.121621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.121630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 
00:31:12.983 [2024-07-15 09:40:00.121904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.121913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.122192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.122200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.122526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.122535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.122869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.122877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.123218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.123227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.123534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.123543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.123860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.123869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.124181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.124190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.124523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.124532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.124765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.124774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 
00:31:12.983 [2024-07-15 09:40:00.124924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.124932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.125232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.125241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.125421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.125429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.125782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.125788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.126129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.126135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.126453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.126459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.126755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.126761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.127089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.127095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.127222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.127228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.127507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.127514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 
00:31:12.983 [2024-07-15 09:40:00.127817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.127823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.128004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.128012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.128317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.128323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.128616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.128623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.128917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.128925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.129185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.129193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.129502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.129510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.129871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.129879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.130228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.130237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.130556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.130565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 
00:31:12.983 [2024-07-15 09:40:00.130832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.130841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.131134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.131143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.131442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.131450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.983 qpair failed and we were unable to recover it. 00:31:12.983 [2024-07-15 09:40:00.131821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.983 [2024-07-15 09:40:00.131830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.132179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.132187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.132505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.132514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.132865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.132873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.133089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.133098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.133428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.133438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.133763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.133771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 
00:31:12.984 [2024-07-15 09:40:00.134063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.134071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.134422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.134431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.134744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.134756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.135068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.135077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.135397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.135406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.135698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.135707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.136003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.136012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.136343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.136352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.136660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.136668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.136974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.136983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 
00:31:12.984 [2024-07-15 09:40:00.137282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.137291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.137609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.137619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.137800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.137809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.138120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.138129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.138474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.138483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.138677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.138685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.138979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.138987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.139311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.139319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.139529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.139537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.139853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.139862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 
00:31:12.984 [2024-07-15 09:40:00.140199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.140207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.140602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.140612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.140792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.140801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.141114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.141122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.141285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.141294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.141564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.141572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.141866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.141875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.142197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.142206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.142536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.142545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.984 [2024-07-15 09:40:00.142858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.142866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 
00:31:12.984 [2024-07-15 09:40:00.143215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.984 [2024-07-15 09:40:00.143224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.984 qpair failed and we were unable to recover it. 00:31:12.985 [2024-07-15 09:40:00.143531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.985 [2024-07-15 09:40:00.143540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.985 qpair failed and we were unable to recover it. 00:31:12.985 [2024-07-15 09:40:00.143855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.985 [2024-07-15 09:40:00.143863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.985 qpair failed and we were unable to recover it. 00:31:12.985 [2024-07-15 09:40:00.144175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.985 [2024-07-15 09:40:00.144183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.985 qpair failed and we were unable to recover it. 00:31:12.985 [2024-07-15 09:40:00.144473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.985 [2024-07-15 09:40:00.144480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.985 qpair failed and we were unable to recover it. 00:31:12.985 [2024-07-15 09:40:00.144834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.985 [2024-07-15 09:40:00.144842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.985 qpair failed and we were unable to recover it. 00:31:12.985 [2024-07-15 09:40:00.145010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.985 [2024-07-15 09:40:00.145017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.985 qpair failed and we were unable to recover it. 00:31:12.985 [2024-07-15 09:40:00.145364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.985 [2024-07-15 09:40:00.145372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.985 qpair failed and we were unable to recover it. 00:31:12.985 [2024-07-15 09:40:00.145694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.985 [2024-07-15 09:40:00.145701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.985 qpair failed and we were unable to recover it. 00:31:12.985 [2024-07-15 09:40:00.146018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.985 [2024-07-15 09:40:00.146025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.985 qpair failed and we were unable to recover it. 
00:31:12.985 [2024-07-15 09:40:00.146336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.985 [2024-07-15 09:40:00.146344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.985 qpair failed and we were unable to recover it. 00:31:12.985 [2024-07-15 09:40:00.146660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.985 [2024-07-15 09:40:00.146667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.985 qpair failed and we were unable to recover it. 00:31:12.985 [2024-07-15 09:40:00.146867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.985 [2024-07-15 09:40:00.146874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.985 qpair failed and we were unable to recover it. 00:31:12.985 [2024-07-15 09:40:00.147167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.985 [2024-07-15 09:40:00.147174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:12.985 qpair failed and we were unable to recover it. 00:31:13.261 [2024-07-15 09:40:00.147473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.261 [2024-07-15 09:40:00.147482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.261 qpair failed and we were unable to recover it. 00:31:13.261 [2024-07-15 09:40:00.147821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.261 [2024-07-15 09:40:00.147830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.261 qpair failed and we were unable to recover it. 00:31:13.261 [2024-07-15 09:40:00.148135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.261 [2024-07-15 09:40:00.148143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.261 qpair failed and we were unable to recover it. 00:31:13.261 [2024-07-15 09:40:00.148448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.261 [2024-07-15 09:40:00.148457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.261 qpair failed and we were unable to recover it. 00:31:13.261 [2024-07-15 09:40:00.148776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.261 [2024-07-15 09:40:00.148785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.261 qpair failed and we were unable to recover it. 00:31:13.261 [2024-07-15 09:40:00.149099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.261 [2024-07-15 09:40:00.149106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.261 qpair failed and we were unable to recover it. 
00:31:13.261 [2024-07-15 09:40:00.149429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.261 [2024-07-15 09:40:00.149436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.261 qpair failed and we were unable to recover it. 00:31:13.261 [2024-07-15 09:40:00.149754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.261 [2024-07-15 09:40:00.149763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.261 qpair failed and we were unable to recover it. 00:31:13.261 [2024-07-15 09:40:00.150042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.261 [2024-07-15 09:40:00.150049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.150242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.150250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.150550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.150559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.150879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.150887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.151210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.151218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.151563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.151571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.151875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.151883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.152172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.152180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 
00:31:13.262 [2024-07-15 09:40:00.152510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.152518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.152724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.152733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.153051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.153060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.153384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.153393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.153541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.153549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.153831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.153840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.154172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.154180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.154497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.154505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.154780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.154789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.155070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.155079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 
00:31:13.262 [2024-07-15 09:40:00.155388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.155397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.155711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.155719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.156086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.156093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.156401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.156410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.156702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.156710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.157019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.157028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.157356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.157365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.157672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.157680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.157974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.157982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.158307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.158314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 
00:31:13.262 [2024-07-15 09:40:00.158613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.158621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.158933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.158941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.159246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.159254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.159348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.159354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.159694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.159703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.160104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.160112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.160418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.160426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.160617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.160625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.160934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.160942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.161231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.161239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 
00:31:13.262 [2024-07-15 09:40:00.161580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.161588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.161796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.161804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.162103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.262 [2024-07-15 09:40:00.162111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.262 qpair failed and we were unable to recover it. 00:31:13.262 [2024-07-15 09:40:00.162255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.162264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.162581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.162589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.162922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.162930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.163302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.163310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.163650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.163658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.163981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.163989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.164328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.164336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 
00:31:13.263 [2024-07-15 09:40:00.164660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.164669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.164955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.164964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.165280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.165288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.165581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.165588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.165878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.165885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.166289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.166296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.166597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.166606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.166788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.166797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.167070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.167077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.167267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.167275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 
00:31:13.263 [2024-07-15 09:40:00.167624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.167632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.167969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.167978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.168139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.168148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.168478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.168485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.168786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.168794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.169113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.169120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.169388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.169396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.169725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.169732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.170042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.170051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.170332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.170340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 
00:31:13.263 [2024-07-15 09:40:00.170681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.170689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.170868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.170877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.171211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.171218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.171558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.171567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.171877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.171885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.172200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.172208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.172551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.172558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.172854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.172861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.173202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.173209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.173529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.173538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 
00:31:13.263 [2024-07-15 09:40:00.173851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.173859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.174196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.174205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.174514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.263 [2024-07-15 09:40:00.174522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.263 qpair failed and we were unable to recover it. 00:31:13.263 [2024-07-15 09:40:00.174831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.174839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.175131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.175139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.175316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.175324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.175611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.175619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.175960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.175969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.176159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.176166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.176462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.176470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 
00:31:13.264 [2024-07-15 09:40:00.176654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.176661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.176967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.176977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.177300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.177307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.177656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.177664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.177989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.177998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.178326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.178334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.178642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.178651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.178717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.178725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.179082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.179090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.179284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.179292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 
00:31:13.264 [2024-07-15 09:40:00.179608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.179616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.179928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.179936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.180286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.180294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.180475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.180483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.180822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.180830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.181180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.181188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.181527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.181535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.181861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.181869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.182184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.182192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.182494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.182503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 
00:31:13.264 [2024-07-15 09:40:00.182845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.182853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.183186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.183194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.183534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.183543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.183869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.183877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.184169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.184178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.184367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.184375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.184701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.184710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.185026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.185034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.185386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.185395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.185701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.185709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 
00:31:13.264 [2024-07-15 09:40:00.186035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.186044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.186279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.186287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.186623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.186631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.264 [2024-07-15 09:40:00.186974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.264 [2024-07-15 09:40:00.186982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.264 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.187207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.187214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.187547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.187554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.187857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.187864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.188215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.188222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.188526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.188535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.188853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.188861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 
00:31:13.265 [2024-07-15 09:40:00.189179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.189188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.189515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.189525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.189742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.189750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.190046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.190054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.190353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.190362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.190679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.190687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.190962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.190970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.191285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.191294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.191633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.191641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.191891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.191899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 
00:31:13.265 [2024-07-15 09:40:00.192122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.192130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.192310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.192317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.192623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.192631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.192977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.192986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.193301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.193309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.193634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.193642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.193989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.193997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.194306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.194315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.194635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.194642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.194976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.194985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 
00:31:13.265 [2024-07-15 09:40:00.195198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.195206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.195524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.195533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.195861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.195869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.196179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.196187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.196497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.196505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.196858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.196866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.197213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.197221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.197560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.197568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.197908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.197916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.198255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.198263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 
00:31:13.265 [2024-07-15 09:40:00.198596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.198604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.198927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.198935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.265 [2024-07-15 09:40:00.199293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.265 [2024-07-15 09:40:00.199300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.265 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.199611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.199620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.199936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.199945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.200279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.200288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.200616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.200624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.200939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.200948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.201255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.201262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.201450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.201459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 
00:31:13.266 [2024-07-15 09:40:00.201770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.201778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.202077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.202086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.202416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.202423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.202621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.202629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.202979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.202987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.203325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.203333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.203530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.203537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.203868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.203876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.204198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.204207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.204520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.204528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 
00:31:13.266 [2024-07-15 09:40:00.204824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.204833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.205141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.205149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.205464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.205473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.205798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.205806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.206128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.206135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.206456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.206463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.206761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.206768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.207071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.207079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.207282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.207289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.207678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.207685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 
00:31:13.266 [2024-07-15 09:40:00.207998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.208007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.208439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.208447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.208756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.208765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.209084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.209092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.209483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.209490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.209804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.209821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.210136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.210143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.266 [2024-07-15 09:40:00.210450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.266 [2024-07-15 09:40:00.210458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.266 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.210775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.210783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.211146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.211154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 
00:31:13.267 [2024-07-15 09:40:00.211527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.211535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.211713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.211721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.211873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.211881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.212189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.212197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.212525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.212534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.212856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.212864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.213221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.213228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.213533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.213540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.213885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.213893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.214245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.214253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 
00:31:13.267 [2024-07-15 09:40:00.214614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.214621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.214889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.214898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.215241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.215248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.215579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.215587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.215875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.215883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.216200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.216208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.216515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.216525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.216732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.216740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.217014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.217022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.217336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.217345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 
00:31:13.267 [2024-07-15 09:40:00.217674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.217682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.218020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.218028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.218366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.218375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.218688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.218696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.218980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.218989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.219315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.219324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.219664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.219672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.219993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.220002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.220316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.220324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.220716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.220724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 
00:31:13.267 [2024-07-15 09:40:00.221038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.221047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.221261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.221270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.221579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.221587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.221907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.221915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.222229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.222237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.222473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.222480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.222788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.222796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.223056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-07-15 09:40:00.223064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.267 qpair failed and we were unable to recover it. 00:31:13.267 [2024-07-15 09:40:00.223379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.223387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.223709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.223716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 
00:31:13.268 [2024-07-15 09:40:00.224014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.224021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.224368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.224376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.224686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.224695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.225007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.225015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.225338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.225347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.225651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.225659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.225992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.226008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.226326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.226333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.226621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.226628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.226936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.226944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 
00:31:13.268 [2024-07-15 09:40:00.227248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.227256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.227595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.227604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.227946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.227955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.228282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.228290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.228599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.228607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.229058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.229067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.229383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.229390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.229581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.229588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.229929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.229937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.230267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.230275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 
00:31:13.268 [2024-07-15 09:40:00.230591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.230599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.230928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.230935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.231262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.231270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.231602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.231609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.231876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.231884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.232249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.232256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.232552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.232559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.232757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.232766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.233065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.233073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.233263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.233271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 
00:31:13.268 [2024-07-15 09:40:00.233564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.233572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.233895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.233903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.234235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.234242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.234558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.234565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.234903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.234912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.235231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.235239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.235424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.235431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.235754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.235762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.236080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-07-15 09:40:00.236089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.268 qpair failed and we were unable to recover it. 00:31:13.268 [2024-07-15 09:40:00.236423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.236431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 
00:31:13.269 [2024-07-15 09:40:00.236624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.236632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.236737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.236744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.236967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.236975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.237280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.237287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.237593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.237602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.237836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.237844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.238148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.238156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.238492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.238500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.238843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.238852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.239143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.239151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 
00:31:13.269 [2024-07-15 09:40:00.239310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.239317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.239672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.239681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.240009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.240018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.240424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.240432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.240667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.240675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.240995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.241003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.241320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.241329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.241655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.241662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.241981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.241990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.242226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.242234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 
00:31:13.269 [2024-07-15 09:40:00.242542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.242550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.242866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.242873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.243187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.243195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.243390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.243398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.243726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.243734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.243931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.243939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.244248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.244255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.244595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.244603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.244921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.244929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.245243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.245252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 
00:31:13.269 [2024-07-15 09:40:00.245559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.245567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.245870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.245879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.246193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.246201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.246539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.246548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.246868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.246876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.247199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.247207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.247546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.247554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.247839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.247847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.248186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.248193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 00:31:13.269 [2024-07-15 09:40:00.248533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-07-15 09:40:00.248542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.269 qpair failed and we were unable to recover it. 
00:31:13.269 [2024-07-15 09:40:00.248875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.248883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.249206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.249215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.249574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.249582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.249956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.249964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.250306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.250314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.250713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.250721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.251041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.251051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.251402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.251410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.251722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.251730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.251924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.251933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 
00:31:13.270 [2024-07-15 09:40:00.252274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.252283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.252552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.252561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.252886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.252894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.253267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.253274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.253466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.253474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.253779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.253789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.254119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.254126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.254319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.254327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.254629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.254636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.254931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.254938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 
00:31:13.270 [2024-07-15 09:40:00.255258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.255265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.255463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.255470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.255778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.255786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.256014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.256022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.256343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.256350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.256671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.256680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.256952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.256959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.257304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.257312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.257641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.257648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.257843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.257851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 
00:31:13.270 [2024-07-15 09:40:00.258181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.258188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.258502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.258510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.258865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.258872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.259187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-07-15 09:40:00.259195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-07-15 09:40:00.259507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-07-15 09:40:00.259515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-07-15 09:40:00.259861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-07-15 09:40:00.259869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-07-15 09:40:00.260179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-07-15 09:40:00.260187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-07-15 09:40:00.260499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-07-15 09:40:00.260506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-07-15 09:40:00.260820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-07-15 09:40:00.260828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-07-15 09:40:00.261015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-07-15 09:40:00.261022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 
00:31:13.271 [2024-07-15 09:40:00.261350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-07-15 09:40:00.261358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-07-15 09:40:00.261671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-07-15 09:40:00.261678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-07-15 09:40:00.262016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-07-15 09:40:00.262024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-07-15 09:40:00.262358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-07-15 09:40:00.262366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-07-15 09:40:00.262689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-07-15 09:40:00.262697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-07-15 09:40:00.263004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-07-15 09:40:00.263012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-07-15 09:40:00.263349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-07-15 09:40:00.263357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-07-15 09:40:00.263658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-07-15 09:40:00.263666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-07-15 09:40:00.263979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-07-15 09:40:00.263986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-07-15 09:40:00.264170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-07-15 09:40:00.264178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 
00:31:13.271 [2024-07-15 09:40:00.264514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.271 [2024-07-15 09:40:00.264521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.271 qpair failed and we were unable to recover it.
00:31:13.276 [2024-07-15 09:40:00.328967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.276 [2024-07-15 09:40:00.328976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.276 qpair failed and we were unable to recover it.
00:31:13.276 [2024-07-15 09:40:00.329191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.276 [2024-07-15 09:40:00.329198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.276 qpair failed and we were unable to recover it. 00:31:13.276 [2024-07-15 09:40:00.329525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.276 [2024-07-15 09:40:00.329533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.276 qpair failed and we were unable to recover it. 00:31:13.276 [2024-07-15 09:40:00.329835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.276 [2024-07-15 09:40:00.329842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.276 qpair failed and we were unable to recover it. 00:31:13.276 [2024-07-15 09:40:00.330189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.276 [2024-07-15 09:40:00.330197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.276 qpair failed and we were unable to recover it. 00:31:13.276 [2024-07-15 09:40:00.330521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.276 [2024-07-15 09:40:00.330529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.276 qpair failed and we were unable to recover it. 00:31:13.276 [2024-07-15 09:40:00.330876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.276 [2024-07-15 09:40:00.330885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.276 qpair failed and we were unable to recover it. 00:31:13.276 [2024-07-15 09:40:00.331201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.276 [2024-07-15 09:40:00.331209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.276 qpair failed and we were unable to recover it. 00:31:13.276 [2024-07-15 09:40:00.331535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.276 [2024-07-15 09:40:00.331542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.276 qpair failed and we were unable to recover it. 00:31:13.276 [2024-07-15 09:40:00.331853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.276 [2024-07-15 09:40:00.331861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.276 qpair failed and we were unable to recover it. 00:31:13.276 [2024-07-15 09:40:00.332162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.276 [2024-07-15 09:40:00.332169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.276 qpair failed and we were unable to recover it. 
00:31:13.276 [2024-07-15 09:40:00.332522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.276 [2024-07-15 09:40:00.332529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.276 qpair failed and we were unable to recover it. 00:31:13.276 [2024-07-15 09:40:00.332571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.276 [2024-07-15 09:40:00.332578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.276 qpair failed and we were unable to recover it. 00:31:13.276 [2024-07-15 09:40:00.332875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.276 [2024-07-15 09:40:00.332883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.276 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.333190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.333198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.333538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.333545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.333850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.333857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.334200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.334211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.334439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.334447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.334617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.334626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.334906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.334914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 
00:31:13.277 [2024-07-15 09:40:00.335238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.335245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.335552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.335560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.335877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.335885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.336187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.336195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.336494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.336502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.336798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.336805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.336870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.336877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.337099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.337107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.337420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.337429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.337620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.337629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 
00:31:13.277 [2024-07-15 09:40:00.337940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.337948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.338262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.338270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.338623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.338631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.338917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.338925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.339240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.339248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.339574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.339582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.339910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.339918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.340217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.340225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.340530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.340539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.340840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.340847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 
00:31:13.277 [2024-07-15 09:40:00.341164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.341172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.341475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.341483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.341794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.341802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.342135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.342144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.342459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.342468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.342814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.342822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.343137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.343145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.343445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.343453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.343662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.343670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.343981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.343990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 
00:31:13.277 [2024-07-15 09:40:00.344181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.344189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.344474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.344483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.344824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.344831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-07-15 09:40:00.345129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-07-15 09:40:00.345136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.345449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.345458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.345791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.345799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.346138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.346146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.346441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.346449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.346786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.346796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.347103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.347112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 
00:31:13.278 [2024-07-15 09:40:00.347402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.347410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.347712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.347721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.348065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.348073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.348213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.348221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.348431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.348439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.348747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.348758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.349120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.349128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.349449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.349459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.349784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.349793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.350133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.350141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 
00:31:13.278 [2024-07-15 09:40:00.350328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.350335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.350618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.350626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.350943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.350951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.351261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.351270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.351620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.351628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.352034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.352042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.352237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.352245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.352583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.352591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.352909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.352917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.353234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.353242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 
00:31:13.278 [2024-07-15 09:40:00.353542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.353550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.353798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.353806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.354114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.354123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.354461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.354469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.354788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.354795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.355200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.355208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.355526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.355535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.355760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.355769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.356104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-07-15 09:40:00.356114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-07-15 09:40:00.356280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.356288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 
00:31:13.279 [2024-07-15 09:40:00.356570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.356577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.356893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.356901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.357226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.357235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.357570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.357578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.357790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.357798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.358101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.358109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.358440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.358448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.358759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.358768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.359067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.359083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.359397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.359405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 
00:31:13.279 [2024-07-15 09:40:00.359721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.359730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.360044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.360053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.360376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.360385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.360665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.360674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.360987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.360996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.361291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.361298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.361637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.361645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.361789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.361796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.361914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.361921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.362207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.362215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 
00:31:13.279 [2024-07-15 09:40:00.362537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.362545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.362876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.362885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.363173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.363181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.363523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.363531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.363875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.363884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.364220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.364229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.364554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.364562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.364876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.364884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.365191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.365199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.365537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.365545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 
00:31:13.279 [2024-07-15 09:40:00.365827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.365834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.366154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.366163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.366507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.366515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.366837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.366845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.367174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.367182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.367506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.367513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.367871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.367881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.368182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.368190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.368381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.368389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-07-15 09:40:00.368615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-07-15 09:40:00.368624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 
00:31:13.280 [2024-07-15 09:40:00.368879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-07-15 09:40:00.368887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-07-15 09:40:00.369203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-07-15 09:40:00.369212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-07-15 09:40:00.369520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-07-15 09:40:00.369528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-07-15 09:40:00.369862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-07-15 09:40:00.369870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-07-15 09:40:00.370189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-07-15 09:40:00.370196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-07-15 09:40:00.370512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-07-15 09:40:00.370521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-07-15 09:40:00.370826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-07-15 09:40:00.370834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-07-15 09:40:00.371137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-07-15 09:40:00.371145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-07-15 09:40:00.371446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-07-15 09:40:00.371456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-07-15 09:40:00.371645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-07-15 09:40:00.371652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 
00:31:13.285 [2024-07-15 09:40:00.431274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.431282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.431599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.431607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.431918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.431926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.432225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.432232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.432626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.432634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.432955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.432962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.433276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.433283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.433583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.433590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.433905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.433915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.434127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.434136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 
00:31:13.285 [2024-07-15 09:40:00.434467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.434475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.434796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.434804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.435163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.435171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.435468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.435476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.435619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.435628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.435928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.435936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.436251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.436259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.436571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.436578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.436898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.436906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.437229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.437236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 
00:31:13.285 [2024-07-15 09:40:00.437538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.437546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.437894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.437902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.438183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.438190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.438507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.438515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.438840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.438848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.439189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.439197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.439500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.439508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.439809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.439817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.440139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.440147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.440490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.440497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 
00:31:13.285 [2024-07-15 09:40:00.440871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.440880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.441067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.441075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.441407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.441415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.441737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.441745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.442065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.442073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.442399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.442408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.442724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.442731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.443057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.443064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.443255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.285 [2024-07-15 09:40:00.443263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.285 qpair failed and we were unable to recover it. 00:31:13.285 [2024-07-15 09:40:00.443576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-07-15 09:40:00.443585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 
00:31:13.286 [2024-07-15 09:40:00.443900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-07-15 09:40:00.443908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-07-15 09:40:00.444258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-07-15 09:40:00.444266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-07-15 09:40:00.444575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-07-15 09:40:00.444583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-07-15 09:40:00.444906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-07-15 09:40:00.444914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-07-15 09:40:00.445222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-07-15 09:40:00.445230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-07-15 09:40:00.445577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-07-15 09:40:00.445586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-07-15 09:40:00.445900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-07-15 09:40:00.445908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.562 [2024-07-15 09:40:00.446252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.562 [2024-07-15 09:40:00.446261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.562 qpair failed and we were unable to recover it. 00:31:13.562 [2024-07-15 09:40:00.446603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.562 [2024-07-15 09:40:00.446612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.562 qpair failed and we were unable to recover it. 00:31:13.562 [2024-07-15 09:40:00.446914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.562 [2024-07-15 09:40:00.446922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.562 qpair failed and we were unable to recover it. 
00:31:13.562 [2024-07-15 09:40:00.447271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.562 [2024-07-15 09:40:00.447279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.562 qpair failed and we were unable to recover it. 00:31:13.562 [2024-07-15 09:40:00.447592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.562 [2024-07-15 09:40:00.447599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.562 qpair failed and we were unable to recover it. 00:31:13.562 [2024-07-15 09:40:00.447915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.562 [2024-07-15 09:40:00.447922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.562 qpair failed and we were unable to recover it. 00:31:13.562 [2024-07-15 09:40:00.448307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.562 [2024-07-15 09:40:00.448316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.562 qpair failed and we were unable to recover it. 00:31:13.562 [2024-07-15 09:40:00.448542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.562 [2024-07-15 09:40:00.448551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.562 qpair failed and we were unable to recover it. 00:31:13.562 [2024-07-15 09:40:00.448857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.562 [2024-07-15 09:40:00.448865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.562 qpair failed and we were unable to recover it. 00:31:13.562 [2024-07-15 09:40:00.449185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.562 [2024-07-15 09:40:00.449193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.562 qpair failed and we were unable to recover it. 00:31:13.562 [2024-07-15 09:40:00.449533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.562 [2024-07-15 09:40:00.449541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.562 qpair failed and we were unable to recover it. 00:31:13.562 [2024-07-15 09:40:00.449841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.562 [2024-07-15 09:40:00.449848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.562 qpair failed and we were unable to recover it. 00:31:13.562 [2024-07-15 09:40:00.450165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.562 [2024-07-15 09:40:00.450173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.562 qpair failed and we were unable to recover it. 
00:31:13.562 [2024-07-15 09:40:00.450513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.562 [2024-07-15 09:40:00.450522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.562 qpair failed and we were unable to recover it. 00:31:13.562 [2024-07-15 09:40:00.450826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.562 [2024-07-15 09:40:00.450834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.562 qpair failed and we were unable to recover it. 00:31:13.562 [2024-07-15 09:40:00.451146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.562 [2024-07-15 09:40:00.451155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.562 qpair failed and we were unable to recover it. 00:31:13.562 [2024-07-15 09:40:00.451485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.562 [2024-07-15 09:40:00.451494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.562 qpair failed and we were unable to recover it. 00:31:13.562 [2024-07-15 09:40:00.451807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.562 [2024-07-15 09:40:00.451816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.562 qpair failed and we were unable to recover it. 00:31:13.562 [2024-07-15 09:40:00.452118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.562 [2024-07-15 09:40:00.452126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.562 qpair failed and we were unable to recover it. 00:31:13.562 [2024-07-15 09:40:00.452465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.562 [2024-07-15 09:40:00.452473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.562 qpair failed and we were unable to recover it. 00:31:13.562 [2024-07-15 09:40:00.452797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.562 [2024-07-15 09:40:00.452805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.562 qpair failed and we were unable to recover it. 00:31:13.562 [2024-07-15 09:40:00.453143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.562 [2024-07-15 09:40:00.453152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.562 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.453482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.453491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 
00:31:13.563 [2024-07-15 09:40:00.453883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.453892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.454209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.454218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.454543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.454552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.454828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.454835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.455161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.455170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.455349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.455360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.455692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.455700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.455918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.455926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.456223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.456230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.456575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.456583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 
00:31:13.563 [2024-07-15 09:40:00.456908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.456916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.457234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.457242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.457574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.457582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.457900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.457908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.458169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.458176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.458483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.458491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.458805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.458813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.459085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.459093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.459376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.459384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.459706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.459715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 
00:31:13.563 [2024-07-15 09:40:00.460092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.460101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.460408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.460417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.460715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.460724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.461031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.461040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.461351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.461359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.461677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.461685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.462001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.462010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.462320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.462328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.462720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.462729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.463039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.463048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 
00:31:13.563 [2024-07-15 09:40:00.463359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.463368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.463709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.463717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.463903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.463912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.464226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.464234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.464548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.464555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.464874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.464882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.465224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.465232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.465574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.465581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.465859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.563 [2024-07-15 09:40:00.465867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.563 qpair failed and we were unable to recover it. 00:31:13.563 [2024-07-15 09:40:00.466189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.466196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 
00:31:13.564 [2024-07-15 09:40:00.466363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.466371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.466737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.466745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.467105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.467113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.467265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.467273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.467605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.467614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.467923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.467933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.468265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.468273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.468583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.468590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.468920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.468927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.469252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.469261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 
00:31:13.564 [2024-07-15 09:40:00.469584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.469592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.469912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.469920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.470234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.470242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.470531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.470539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.470899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.470906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.471235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.471243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.471457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.471465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 902163 Killed "${NVMF_APP[@]}" "$@" 00:31:13.564 [2024-07-15 09:40:00.471771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.471779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.472134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.472141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 
00:31:13.564 [2024-07-15 09:40:00.472426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.472435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 09:40:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.472722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.472730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 09:40:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:13.564 [2024-07-15 09:40:00.473074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.473082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 09:40:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:13.564 09:40:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:13.564 [2024-07-15 09:40:00.473386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.473394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 09:40:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:13.564 [2024-07-15 09:40:00.473704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.473713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.474054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.474062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.474390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.474399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.474722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.474730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 
00:31:13.564 [2024-07-15 09:40:00.475022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.475029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.475350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.475357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.475764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.475772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.476065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.476072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.476381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.476387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.476748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.476758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.476991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.476999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.477219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.477226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.477412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.477420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 00:31:13.564 [2024-07-15 09:40:00.477772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.564 [2024-07-15 09:40:00.477779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.564 qpair failed and we were unable to recover it. 
00:31:13.564 [2024-07-15 09:40:00.478014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.478023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.478323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.478331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.478626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.478633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.478945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.478953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.479273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.479279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.479592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.479600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.479904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.479912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.480240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.480248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.480576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.480585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.480733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.480742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 
00:31:13.565 09:40:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=903114 00:31:13.565 [2024-07-15 09:40:00.481023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.481033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 09:40:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 903114 00:31:13.565 [2024-07-15 09:40:00.481354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.481363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.481686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.481695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 09:40:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 903114 ']' 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 09:40:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:13.565 09:40:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.565 [2024-07-15 09:40:00.482027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.482037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 09:40:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:13.565 [2024-07-15 09:40:00.482232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.482242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 09:40:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.565 [2024-07-15 09:40:00.482601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.482613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 
00:31:13.565 09:40:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:13.565 09:40:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:13.565 [2024-07-15 09:40:00.482934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.482945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.483148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.483157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.483610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.483624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.483954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.483963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.484170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.484179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.484499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.484507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.484799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.484808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.485135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.485144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.485355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.485363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 
00:31:13.565 [2024-07-15 09:40:00.485624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.485633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.485973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.485982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.486307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.486316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.486643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.486652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.487022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.487030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.487321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.487330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.487596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.487604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.487813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.487822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.488096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.488105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.488417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.488425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 
00:31:13.565 [2024-07-15 09:40:00.488733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.565 [2024-07-15 09:40:00.488743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.565 qpair failed and we were unable to recover it. 00:31:13.565 [2024-07-15 09:40:00.489043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.489052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.489366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.489375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.489531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.489541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.489837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.489848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.490175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.490184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.490502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.490512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.490702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.490710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.491035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.491044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.491355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.491364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 
00:31:13.566 [2024-07-15 09:40:00.491673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.491681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.491872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.491882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.492101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.492110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.492405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.492413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.492648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.492657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.493052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.493061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.493323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.493333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.493493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.493502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.493822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.493830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.494177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.494185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 
00:31:13.566 [2024-07-15 09:40:00.494510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.494518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.494827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.494836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.495182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.495190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.495364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.495374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.495451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.495459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.495791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.495800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.496097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.496105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.496312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.496320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.496503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.496511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.496749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.496760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 
00:31:13.566 [2024-07-15 09:40:00.497066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.497074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.497413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.497421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.497762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.497771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.497957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.566 [2024-07-15 09:40:00.497965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.566 qpair failed and we were unable to recover it. 00:31:13.566 [2024-07-15 09:40:00.498290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.498298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.498627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.498635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.498917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.498925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.499236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.499244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.499594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.499601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.499801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.499810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 
00:31:13.567 [2024-07-15 09:40:00.500122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.500129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.500423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.500430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.500819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.500826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.500996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.501003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.501546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.501636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.502072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.502110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.502494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.502541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.502898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.502908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.503003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.503009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.503249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.503256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 
00:31:13.567 [2024-07-15 09:40:00.503629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.503637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.503953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.503961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.504313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.504321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.504662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.504669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.504864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.504872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.505174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.505181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.505539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.505546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.505885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.505893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.506244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.506251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.506541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.506548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 
00:31:13.567 [2024-07-15 09:40:00.506879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.506887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.507222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.507229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.507586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.507594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.507783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.507791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.507972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.507979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.508306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.508314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.508630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.508639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.508961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.508968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.509272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.509280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.509625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.509632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 
00:31:13.567 [2024-07-15 09:40:00.509939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.509947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.510270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.510278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.510623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.510632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.511017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.511025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.567 qpair failed and we were unable to recover it. 00:31:13.567 [2024-07-15 09:40:00.511371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.567 [2024-07-15 09:40:00.511378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.511547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.511556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.511907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.511915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.512240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.512247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.512444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.512452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.512575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.512582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 
00:31:13.568 [2024-07-15 09:40:00.512888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.512896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.513224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.513231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.513561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.513568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.513900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.513908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.514237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.514245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.514369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.514376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.514659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.514669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.514993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.515000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.515200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.515209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.515549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.515557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 
00:31:13.568 [2024-07-15 09:40:00.515887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.515895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.516226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.516233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.516577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.516585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.516904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.516912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.517256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.517263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.517619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.517627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.517827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.517835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.518177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.518184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.518481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.518489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.518688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.518695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 
00:31:13.568 [2024-07-15 09:40:00.519031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.519039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.519340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.519348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.519533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.519541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.519856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.519863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.520096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.520104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.520425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.520432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.520725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.520733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.520942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.520950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.521148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.521155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.521472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.521480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 
00:31:13.568 [2024-07-15 09:40:00.521793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.521801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.522129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.522138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.522462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.522470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.522654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.522662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.523031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.568 [2024-07-15 09:40:00.523039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.568 qpair failed and we were unable to recover it. 00:31:13.568 [2024-07-15 09:40:00.523348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.523357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.523709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.523717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.524089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.524097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.524411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.524419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.524718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.524726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 
00:31:13.569 [2024-07-15 09:40:00.525048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.525055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.525377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.525384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.525700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.525708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.525935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.525943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.526137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.526153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.526486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.526494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.526812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.526824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.527143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.527152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.527347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.527355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.527665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.527673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 
00:31:13.569 [2024-07-15 09:40:00.527957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.527964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.528277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.528285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.528552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.528560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.528865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.528873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.529209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.529216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.529522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.529531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.529878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.529885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.530041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.530050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.530218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.530227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.530559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.530567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 
00:31:13.569 [2024-07-15 09:40:00.530863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.530871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.531250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.531258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.531434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.531442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.531773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.531781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.532104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.532111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.532443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.532450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.532749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.532767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.532996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.533004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.533327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.533335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.533710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.533718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 
00:31:13.569 [2024-07-15 09:40:00.534045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.534053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.534211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.534220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.534544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.534552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.534829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.534837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.535049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.535057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.535420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.569 [2024-07-15 09:40:00.535428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.569 qpair failed and we were unable to recover it. 00:31:13.569 [2024-07-15 09:40:00.535766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.535775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.536107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.536115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.536400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.536408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.536740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.536747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 
00:31:13.570 [2024-07-15 09:40:00.537053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.537062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.537288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.537297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.537474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.537482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.537667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.537676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.538040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.538049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.538368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.538376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.538694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.538704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.539028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.539035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.539323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.539332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.539600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.539608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 
00:31:13.570 [2024-07-15 09:40:00.539954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.539962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.540168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.540176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.540372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.540379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.540594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.540602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.540770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.540778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.541077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.541084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.541386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.541394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.541714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.541722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.542041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.542050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.542372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.542380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 
00:31:13.570 [2024-07-15 09:40:00.542678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.542687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.543010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.543018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.543287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.543295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.543614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.543623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.543980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.543987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.544222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.544229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.544549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.544556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.544818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.544826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.545180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.545187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.545504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.545512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 
00:31:13.570 [2024-07-15 09:40:00.545791] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:31:13.570 [2024-07-15 09:40:00.545840] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:13.570 [2024-07-15 09:40:00.545850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.545859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.546185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.546192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.546485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.546492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.546695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.546703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.570 [2024-07-15 09:40:00.547042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.570 [2024-07-15 09:40:00.547051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.570 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.547222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.547230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.547462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.547470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.547652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.547660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.548013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.548022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 
00:31:13.571 [2024-07-15 09:40:00.548336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.548345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.548690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.548698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.548902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.548910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.549247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.549256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.549512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.549520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.549739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.549747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.550065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.550075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.550424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.550433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.550759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.550768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.550991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.551000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 
00:31:13.571 [2024-07-15 09:40:00.551329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.551337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.551529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.551537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.551702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.551710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.552005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.552014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.552162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.552171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.552516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.552525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.552874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.552882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.553155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.553165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.553348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.553356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.553667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.553676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 
00:31:13.571 [2024-07-15 09:40:00.553981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.553991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.554347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.554355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.554722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.554731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.555043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.555052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.555379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.555388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.555726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.555735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.556080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.556090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.556435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.556443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.556688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.556697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.556886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.556895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 
00:31:13.571 [2024-07-15 09:40:00.557100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.557108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.571 qpair failed and we were unable to recover it. 00:31:13.571 [2024-07-15 09:40:00.557453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.571 [2024-07-15 09:40:00.557461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.557785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.557794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.558137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.558147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.558331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.558340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.558666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.558674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.558993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.559001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.559336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.559345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.559658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.559666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.559915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.559924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 
00:31:13.572 [2024-07-15 09:40:00.560115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.560124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.560316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.560324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.560522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.560531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.560874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.560882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.561233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.561243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.561578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.561587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.561916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.561926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.562147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.562156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.562492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.562499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.562804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.562813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 
00:31:13.572 [2024-07-15 09:40:00.563132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.563140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.563472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.563480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.563794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.563802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.564125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.564133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.564341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.564349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.564667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.564675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.564999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.565007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.565181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.565190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.565496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.565505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.565864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.565871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 
00:31:13.572 [2024-07-15 09:40:00.566164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.566172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.566506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.566514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.566830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.566838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.567012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.567021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.567240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.567248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.567529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.567537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.567819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.567828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.568063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.568071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.568368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.568376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.568687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.568695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 
00:31:13.572 [2024-07-15 09:40:00.568992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.569000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.569356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.572 [2024-07-15 09:40:00.569364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.572 qpair failed and we were unable to recover it. 00:31:13.572 [2024-07-15 09:40:00.569693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.569701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.569871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.569879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.570080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.570088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.570440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.570448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.570798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.570806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.571127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.571136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.571481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.571489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.571829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.571837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 
00:31:13.573 [2024-07-15 09:40:00.572133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.572141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.572445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.572454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.572819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.572828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.573140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.573148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.573478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.573485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.573823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.573831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.574142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.574152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.574502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.574510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.574706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.574714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.574903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.574911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 
00:31:13.573 [2024-07-15 09:40:00.575296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.575304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.575604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.575612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.575922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.575931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.576280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.576288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.576615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.576623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.576873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.576881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.577081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.577089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.577419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.577427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.577762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.577770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.578088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.578096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 
00:31:13.573 [2024-07-15 09:40:00.578411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.578420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.578760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.578769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.579115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.579123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.579319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.579327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.579654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.579662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.579944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.579952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.580276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.580284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.580493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.580501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.580807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.580816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 00:31:13.573 [2024-07-15 09:40:00.581164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.573 [2024-07-15 09:40:00.581173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.573 qpair failed and we were unable to recover it. 
00:31:13.573 [2024-07-15 09:40:00.581489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.573 [2024-07-15 09:40:00.581498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.573 qpair failed and we were unable to recover it.
00:31:13.573 [2024-07-15 09:40:00.581682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.573 [2024-07-15 09:40:00.581690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.573 qpair failed and we were unable to recover it.
00:31:13.573 [2024-07-15 09:40:00.582032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.573 [2024-07-15 09:40:00.582040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.573 qpair failed and we were unable to recover it.
00:31:13.573 EAL: No free 2048 kB hugepages reported on node 1
00:31:13.574 [2024-07-15 09:40:00.582356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.574 [2024-07-15 09:40:00.582365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.574 qpair failed and we were unable to recover it.
00:31:13.574 [2024-07-15 09:40:00.582558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.574 [2024-07-15 09:40:00.582566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.574 qpair failed and we were unable to recover it.
00:31:13.574 [2024-07-15 09:40:00.582871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.574 [2024-07-15 09:40:00.582879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.574 qpair failed and we were unable to recover it.
00:31:13.574 [2024-07-15 09:40:00.583184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.574 [2024-07-15 09:40:00.583191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.574 qpair failed and we were unable to recover it.
00:31:13.574 [2024-07-15 09:40:00.583494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.574 [2024-07-15 09:40:00.583502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.574 qpair failed and we were unable to recover it.
00:31:13.574 [2024-07-15 09:40:00.583816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.574 [2024-07-15 09:40:00.583824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.574 qpair failed and we were unable to recover it.
00:31:13.574 [2024-07-15 09:40:00.584027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.574 [2024-07-15 09:40:00.584035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.574 qpair failed and we were unable to recover it.
00:31:13.574 [2024-07-15 09:40:00.584325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.584333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.584528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.584536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.584907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.584915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.585226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.585234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.585570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.585577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.585927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.585935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.586251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.586261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.586491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.586500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.586819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.586827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.586998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.587007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 
00:31:13.574 [2024-07-15 09:40:00.587335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.587343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.587630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.587638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.587976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.587984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.588327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.588335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.588483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.588491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.588804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.588811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.589140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.589149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.589466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.589473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.589760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.589769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.590126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.590135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 
00:31:13.574 [2024-07-15 09:40:00.590448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.590455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.590777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.590785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.590983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.590991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.591187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.591194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.591523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.591530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.591833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.591840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.592182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.592188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.592489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.592495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.592812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.592819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.593105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.593112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 
00:31:13.574 [2024-07-15 09:40:00.593454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.593462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.593772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.593779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.594114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.574 [2024-07-15 09:40:00.594120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.574 qpair failed and we were unable to recover it. 00:31:13.574 [2024-07-15 09:40:00.594454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.594461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.594641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.594648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.594902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.594911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.595242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.595250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.595572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.595579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.595764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.595772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.596060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.596066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 
00:31:13.575 [2024-07-15 09:40:00.596393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.596400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.596721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.596728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.597065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.597073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.597421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.597428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.597755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.597762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.598093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.598100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.598425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.598434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.598773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.598779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.598970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.598977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.599357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.599363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 
00:31:13.575 [2024-07-15 09:40:00.599675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.599682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.600006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.600013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.600311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.600318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.600646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.600652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.600942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.600949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.601265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.601272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.601447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.601455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.601631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.601638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.601918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.601926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.602231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.602238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 
00:31:13.575 [2024-07-15 09:40:00.602414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.602421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.602723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.602729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.603044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.603051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.603353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.603360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.603685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.603692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.603879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.603886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.604168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.604175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.604368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.575 [2024-07-15 09:40:00.604374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.575 qpair failed and we were unable to recover it. 00:31:13.575 [2024-07-15 09:40:00.604690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.604696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.605017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.605024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 
00:31:13.576 [2024-07-15 09:40:00.605334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.605340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.605525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.605532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.605919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.605925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.606249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.606255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.606564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.606570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.606863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.606870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.607207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.607213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.607520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.607526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.607793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.607802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.608108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.608114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 
00:31:13.576 [2024-07-15 09:40:00.608457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.608464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.608770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.608777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.609088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.609095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.609420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.609426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.609763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.609770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.609951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.609958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.610347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.610356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.610705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.610712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.611027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.611034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.611343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.611349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 
00:31:13.576 [2024-07-15 09:40:00.611647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.611653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.611857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.611864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.612193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.612199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.612503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.612510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.612828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.612834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.613135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.613142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.613314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.613321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.613624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.613631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.613931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.613938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.614246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.614253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 
00:31:13.576 [2024-07-15 09:40:00.614583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.614590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.614781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.614788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.615135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.615141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.615367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.615374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.615749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.615763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.615955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.615962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.616303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.616309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.616612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.616618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.576 [2024-07-15 09:40:00.616932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.576 [2024-07-15 09:40:00.616939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.576 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.617277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.617284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 
00:31:13.577 [2024-07-15 09:40:00.617605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.617612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.617989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.617996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.618291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.618299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.618651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.618658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.618991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.618998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.619231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.619238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.619578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.619584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.619936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.619942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.620282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.620288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.620585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.620592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 
00:31:13.577 [2024-07-15 09:40:00.620784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.620791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.621167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.621173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.621361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.621369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.621695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.621701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.622016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.622022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.622419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.622426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.622721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.622729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.623087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.623094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.623439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.623447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.623793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.623800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 
00:31:13.577 [2024-07-15 09:40:00.624122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.577 [2024-07-15 09:40:00.624128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.577 qpair failed and we were unable to recover it.
00:31:13.577 [2024-07-15 09:40:00.624229] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:31:13.577 [2024-07-15 09:40:00.624434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.577 [2024-07-15 09:40:00.624443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.577 qpair failed and we were unable to recover it.
00:31:13.577 [2024-07-15 09:40:00.624759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.577 [2024-07-15 09:40:00.624767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.577 qpair failed and we were unable to recover it.
00:31:13.577 [2024-07-15 09:40:00.624980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.577 [2024-07-15 09:40:00.624987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.577 qpair failed and we were unable to recover it.
00:31:13.577 [2024-07-15 09:40:00.625359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.577 [2024-07-15 09:40:00.625366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.577 qpair failed and we were unable to recover it.
00:31:13.577 [2024-07-15 09:40:00.625590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.577 [2024-07-15 09:40:00.625597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.577 qpair failed and we were unable to recover it.
00:31:13.577 [2024-07-15 09:40:00.625916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.577 [2024-07-15 09:40:00.625923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.577 qpair failed and we were unable to recover it.
00:31:13.577 [2024-07-15 09:40:00.626247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.577 [2024-07-15 09:40:00.626254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.577 qpair failed and we were unable to recover it.
00:31:13.577 [2024-07-15 09:40:00.626553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.577 [2024-07-15 09:40:00.626560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.577 qpair failed and we were unable to recover it.
00:31:13.577 [2024-07-15 09:40:00.626874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.626884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.627174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.627181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.627386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.627393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.627719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.627725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.628045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.628052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.628440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.628448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.628795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.628803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.629123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.629131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.629437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.629445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 00:31:13.577 [2024-07-15 09:40:00.629740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.629747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.577 qpair failed and we were unable to recover it. 
00:31:13.577 [2024-07-15 09:40:00.630074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.577 [2024-07-15 09:40:00.630081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.630251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.630258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.630645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.630652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.630949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.630956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.631257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.631264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.631471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.631479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.631829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.631837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.632149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.632155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.632454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.632460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.632794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.632801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 
00:31:13.578 [2024-07-15 09:40:00.632966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.632973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.633343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.633350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.633550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.633557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.633872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.633878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.634226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.634233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.634556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.634564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.634890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.634897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.635222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.635230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.635538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.635545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.635872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.635879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 
00:31:13.578 [2024-07-15 09:40:00.636205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.636212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.636535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.636542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.636921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.636928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.637226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.637232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.637620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.637627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.637775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.637783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.638084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.638092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.638276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.638283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.638705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.638711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.639011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.639018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 
00:31:13.578 [2024-07-15 09:40:00.639330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.639339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.639666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.639674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.640011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.640018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.640343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.640351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.640707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.640715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.640880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.640888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.641226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.641233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.641425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.641432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.641732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.641739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.642122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.642129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 
00:31:13.578 [2024-07-15 09:40:00.642449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.578 [2024-07-15 09:40:00.642456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.578 qpair failed and we were unable to recover it. 00:31:13.578 [2024-07-15 09:40:00.642650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.642657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.642978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.642985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.643158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.643165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.643567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.643573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.643895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.643902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.644222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.644228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.644528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.644535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.644834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.644841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.645170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.645177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 
00:31:13.579 [2024-07-15 09:40:00.645463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.645470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.645793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.645801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.646092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.646099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.646403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.646410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.646736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.646743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.647071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.647077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.647376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.647384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.647700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.647708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.648012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.648019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.648342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.648349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 
00:31:13.579 [2024-07-15 09:40:00.648651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.648658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.648952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.648960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.649268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.649275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.649552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.649559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.649880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.649887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.650246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.650253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.650572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.650579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.650924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.650932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.651267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.651273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.651607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.651615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 
00:31:13.579 [2024-07-15 09:40:00.651819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.651829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.652146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.652153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.652529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.652537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.652843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.652850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.579 [2024-07-15 09:40:00.653173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.579 [2024-07-15 09:40:00.653180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.579 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.653502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.653508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.653698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.653704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.654000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.654006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.654318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.654324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.654549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.654556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 
00:31:13.580 [2024-07-15 09:40:00.654776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.654783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.655166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.655172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.655553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.655560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.655732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.655739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.656077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.656084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.656435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.656441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.656566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.656573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.657013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.657020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.657340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.657346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.657656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.657663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 
00:31:13.580 [2024-07-15 09:40:00.657978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.657985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.658329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.658336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.658649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.658657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.658907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.658914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.659078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.659085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.659477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.659484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.659831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.659840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.660172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.660180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.660491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.660498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.660922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.660931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 
00:31:13.580 [2024-07-15 09:40:00.661259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.661267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.661620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.661628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.661942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.661950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.662152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.662161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.662520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.662527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.662710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.662716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.663068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.663075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.663251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.663258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.663563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.663570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.663876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.663885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 
00:31:13.580 [2024-07-15 09:40:00.664261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.664271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.664667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.664674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.664917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.664924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.665270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.665278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.665596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.665603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.580 qpair failed and we were unable to recover it. 00:31:13.580 [2024-07-15 09:40:00.665934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.580 [2024-07-15 09:40:00.665941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.666246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.666253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.666438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.666446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.666781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.666788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.667021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.667028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 
00:31:13.581 [2024-07-15 09:40:00.667386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.667393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.667714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.667721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.668038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.668045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.668365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.668372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.668680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.668687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.668918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.668926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.669264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.669271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.669600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.669607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.669916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.669924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.670229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.670237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 
00:31:13.581 [2024-07-15 09:40:00.670569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.670577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.670798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.670805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.671119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.671126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.671447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.671454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.671653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.671662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.671991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.671998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.672301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.672308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.672638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.672645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.672985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.672992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.673235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.673242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 
00:31:13.581 [2024-07-15 09:40:00.673569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.673577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.673778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.673784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.674066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.674073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.674418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.674424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.674721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.674728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.674911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.674919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.675201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.675208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.675414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.675421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.675746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.675758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.676044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.676051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 
00:31:13.581 [2024-07-15 09:40:00.676354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.676362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.676558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.676566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.676818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.676825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.677172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.677179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.677492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.677499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.677804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.581 [2024-07-15 09:40:00.677810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.581 qpair failed and we were unable to recover it. 00:31:13.581 [2024-07-15 09:40:00.678021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.678028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.678352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.678359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.678665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.678673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.678973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.678981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 
00:31:13.582 [2024-07-15 09:40:00.679199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.679205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.679529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.679535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.679903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.679910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.680212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.680219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.680560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.680567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.680872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.680879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.681194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.681201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.681526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.681534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.681666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.681673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.681960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.681967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 
00:31:13.582 [2024-07-15 09:40:00.682295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.682302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.682642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.682648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.682940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.682946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.683293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.683301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.683616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.683622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.683793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.683800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.684091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.684098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.684305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.684312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.684709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.684715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.685034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.685041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 
00:31:13.582 [2024-07-15 09:40:00.685236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.582 [2024-07-15 09:40:00.685243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.582 qpair failed and we were unable to recover it.
00:31:13.582 [2024-07-15 09:40:00.685552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.582 [2024-07-15 09:40:00.685559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.582 qpair failed and we were unable to recover it.
00:31:13.582 [2024-07-15 09:40:00.685867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.582 [2024-07-15 09:40:00.685874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.582 qpair failed and we were unable to recover it.
00:31:13.582 [2024-07-15 09:40:00.686183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.582 [2024-07-15 09:40:00.686189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.582 qpair failed and we were unable to recover it.
00:31:13.582 [2024-07-15 09:40:00.686484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.582 [2024-07-15 09:40:00.686492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.582 qpair failed and we were unable to recover it.
00:31:13.582 [2024-07-15 09:40:00.686796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.582 [2024-07-15 09:40:00.686802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.582 qpair failed and we were unable to recover it.
00:31:13.582 [2024-07-15 09:40:00.686974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.582 [2024-07-15 09:40:00.686981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.582 qpair failed and we were unable to recover it.
00:31:13.582 [2024-07-15 09:40:00.687091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff8800 is same with the state(5) to be set
00:31:13.582 [2024-07-15 09:40:00.687607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.582 [2024-07-15 09:40:00.687696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420
00:31:13.582 qpair failed and we were unable to recover it.
00:31:13.582 [2024-07-15 09:40:00.688285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.582 [2024-07-15 09:40:00.688374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420
00:31:13.582 qpair failed and we were unable to recover it.
00:31:13.582 [2024-07-15 09:40:00.688721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.688729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.689158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.689165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.689502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.689509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.689847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.689854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.690192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.690199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.690576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.690583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.691028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.582 [2024-07-15 09:40:00.691035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.582 qpair failed and we were unable to recover it. 00:31:13.582 [2024-07-15 09:40:00.691233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.691240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 00:31:13.583 [2024-07-15 09:40:00.691637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.691645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 00:31:13.583 [2024-07-15 09:40:00.692020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.692027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 
00:31:13.583 [2024-07-15 09:40:00.692326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.583 [2024-07-15 09:40:00.692332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.583 qpair failed and we were unable to recover it.
00:31:13.583 [2024-07-15 09:40:00.692617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.583 [2024-07-15 09:40:00.692624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.583 [2024-07-15 09:40:00.692613] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:13.583 qpair failed and we were unable to recover it.
00:31:13.583 [2024-07-15 09:40:00.692643] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:13.583 [2024-07-15 09:40:00.692652] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:13.583 [2024-07-15 09:40:00.692660] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:13.583 [2024-07-15 09:40:00.692667] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:13.583 [2024-07-15 09:40:00.692836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.583 [2024-07-15 09:40:00.692845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.583 qpair failed and we were unable to recover it.
00:31:13.583 [2024-07-15 09:40:00.693135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.583 [2024-07-15 09:40:00.693141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.583 qpair failed and we were unable to recover it.
00:31:13.583 [2024-07-15 09:40:00.693335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.583 [2024-07-15 09:40:00.693343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.583 qpair failed and we were unable to recover it.
00:31:13.583 [2024-07-15 09:40:00.693332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:31:13.583 [2024-07-15 09:40:00.693576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.583 [2024-07-15 09:40:00.693583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.583 qpair failed and we were unable to recover it.
00:31:13.583 [2024-07-15 09:40:00.693489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:31:13.583 [2024-07-15 09:40:00.693611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:31:13.583 [2024-07-15 09:40:00.693613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:31:13.583 [2024-07-15 09:40:00.693984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.583 [2024-07-15 09:40:00.693991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.583 qpair failed and we were unable to recover it.
00:31:13.583 [2024-07-15 09:40:00.694328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.694334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 00:31:13.583 [2024-07-15 09:40:00.694633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.694639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 00:31:13.583 [2024-07-15 09:40:00.694961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.694968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 00:31:13.583 [2024-07-15 09:40:00.695181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.695196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 00:31:13.583 [2024-07-15 09:40:00.695546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.695553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 00:31:13.583 [2024-07-15 09:40:00.695727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.695734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 00:31:13.583 [2024-07-15 09:40:00.696114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.696121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 00:31:13.583 [2024-07-15 09:40:00.696442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.696450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 00:31:13.583 [2024-07-15 09:40:00.696629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.696635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 00:31:13.583 [2024-07-15 09:40:00.696844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.696851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 
00:31:13.583 [2024-07-15 09:40:00.697176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.697183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 00:31:13.583 [2024-07-15 09:40:00.697409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.697415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 00:31:13.583 [2024-07-15 09:40:00.697755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.697762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 00:31:13.583 [2024-07-15 09:40:00.697991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.697998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 00:31:13.583 [2024-07-15 09:40:00.698123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.698130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 00:31:13.583 [2024-07-15 09:40:00.698251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.698257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 00:31:13.583 [2024-07-15 09:40:00.698560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.698567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 00:31:13.583 [2024-07-15 09:40:00.698794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.698801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 00:31:13.583 [2024-07-15 09:40:00.699009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.699015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 00:31:13.583 [2024-07-15 09:40:00.699410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.699416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 
00:31:13.583 [2024-07-15 09:40:00.699805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.699812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 00:31:13.583 [2024-07-15 09:40:00.700045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.700052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 00:31:13.583 [2024-07-15 09:40:00.700370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.583 [2024-07-15 09:40:00.700377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.583 qpair failed and we were unable to recover it. 00:31:13.583 [2024-07-15 09:40:00.700527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.700533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.700795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.700802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.701098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.701105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.701292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.701298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.701557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.701564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.701796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.701803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.702110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.702117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 
00:31:13.584 [2024-07-15 09:40:00.702444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.702450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.702654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.702661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.702736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.702743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.703109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.703116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.703444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.703451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.703770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.703777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.704116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.704122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.704346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.704353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.704546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.704553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.704746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.704758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 
00:31:13.584 [2024-07-15 09:40:00.705055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.705062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.705299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.705307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.705502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.705509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.705842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.705849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.706059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.706066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.706500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.706507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.706727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.706733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.706906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.706917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.707154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.707161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.707448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.707454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 
00:31:13.584 [2024-07-15 09:40:00.707787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.707795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.708121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.708128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.708426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.708434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.708779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.708787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.709111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.709119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.709442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.709450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.709764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.709772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.709872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.709879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.710158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.710165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.710479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.710486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 
00:31:13.584 [2024-07-15 09:40:00.710793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.710800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.710888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.710894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.584 [2024-07-15 09:40:00.711092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.584 [2024-07-15 09:40:00.711099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.584 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.711412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.711419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.711597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.711604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.711978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.711986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.712210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.712218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.712437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.712444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.712821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.712828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.713124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.713133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 
00:31:13.585 [2024-07-15 09:40:00.713422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.713429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.713815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.713824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.714029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.714036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.714210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.714216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.714413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.714420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.714754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.714762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.714935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.714943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.715337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.715346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.715720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.715727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.716036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.716043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 
00:31:13.585 [2024-07-15 09:40:00.716221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.716229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.716512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.716519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.716829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.716837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.717034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.717042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.717429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.717435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.717754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.717761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.718080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.718087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.718404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.718413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.718739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.718747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.719080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.719088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 
00:31:13.585 [2024-07-15 09:40:00.719389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.719396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.719720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.719727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.720045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.720052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.720399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.720406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.720495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.720502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.720832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.720839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.721069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.721076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.721288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.721294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.721620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.721626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.721934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.721941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 
00:31:13.585 [2024-07-15 09:40:00.722266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.722273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.722602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.722609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.722933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.722940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.585 [2024-07-15 09:40:00.723247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.585 [2024-07-15 09:40:00.723254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.585 qpair failed and we were unable to recover it. 00:31:13.586 [2024-07-15 09:40:00.723565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.586 [2024-07-15 09:40:00.723573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.586 qpair failed and we were unable to recover it. 00:31:13.586 [2024-07-15 09:40:00.724019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.586 [2024-07-15 09:40:00.724026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.586 qpair failed and we were unable to recover it. 00:31:13.586 [2024-07-15 09:40:00.724222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.586 [2024-07-15 09:40:00.724228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.586 qpair failed and we were unable to recover it. 00:31:13.586 [2024-07-15 09:40:00.724505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.586 [2024-07-15 09:40:00.724511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.586 qpair failed and we were unable to recover it. 00:31:13.586 [2024-07-15 09:40:00.724736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.586 [2024-07-15 09:40:00.724743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.586 qpair failed and we were unable to recover it. 00:31:13.586 [2024-07-15 09:40:00.724937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.586 [2024-07-15 09:40:00.724944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.586 qpair failed and we were unable to recover it. 
00:31:13.586 [2024-07-15 09:40:00.725239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.586 [2024-07-15 09:40:00.725246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.586 qpair failed and we were unable to recover it. 00:31:13.586 [2024-07-15 09:40:00.725295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.586 [2024-07-15 09:40:00.725300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.586 qpair failed and we were unable to recover it. 00:31:13.586 [2024-07-15 09:40:00.725576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.586 [2024-07-15 09:40:00.725583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.586 qpair failed and we were unable to recover it. 00:31:13.586 [2024-07-15 09:40:00.725789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.586 [2024-07-15 09:40:00.725804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.586 qpair failed and we were unable to recover it. 00:31:13.586 [2024-07-15 09:40:00.726141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.586 [2024-07-15 09:40:00.726148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.586 qpair failed and we were unable to recover it. 00:31:13.586 [2024-07-15 09:40:00.726507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.586 [2024-07-15 09:40:00.726515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.586 qpair failed and we were unable to recover it. 00:31:13.586 [2024-07-15 09:40:00.726907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.586 [2024-07-15 09:40:00.726914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.586 qpair failed and we were unable to recover it. 00:31:13.586 [2024-07-15 09:40:00.727247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.586 [2024-07-15 09:40:00.727253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.586 qpair failed and we were unable to recover it. 00:31:13.586 [2024-07-15 09:40:00.727439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.586 [2024-07-15 09:40:00.727445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.586 qpair failed and we were unable to recover it. 00:31:13.586 [2024-07-15 09:40:00.727810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.586 [2024-07-15 09:40:00.727817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.586 qpair failed and we were unable to recover it. 
00:31:13.586 [2024-07-15 09:40:00.728029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.586 [2024-07-15 09:40:00.728036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.586 qpair failed and we were unable to recover it.
[... the same three-line failure pattern repeats continuously from 09:40:00.728029 through 09:40:00.785471: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f8b58000b90 (addr=10.0.0.2, port=4420), and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:31:13.873 [2024-07-15 09:40:00.785465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.873 [2024-07-15 09:40:00.785471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:13.873 qpair failed and we were unable to recover it.
00:31:13.873 [2024-07-15 09:40:00.785651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.785658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 00:31:13.873 [2024-07-15 09:40:00.785819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.785826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 00:31:13.873 [2024-07-15 09:40:00.786133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.786139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 00:31:13.873 [2024-07-15 09:40:00.786365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.786372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 00:31:13.873 [2024-07-15 09:40:00.786541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.786547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 00:31:13.873 [2024-07-15 09:40:00.786864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.786871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 00:31:13.873 [2024-07-15 09:40:00.787204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.787210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 00:31:13.873 [2024-07-15 09:40:00.787535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.787541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 00:31:13.873 [2024-07-15 09:40:00.787840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.787846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 00:31:13.873 [2024-07-15 09:40:00.788027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.788035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 
00:31:13.873 [2024-07-15 09:40:00.788330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.788336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 00:31:13.873 [2024-07-15 09:40:00.788691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.788698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 00:31:13.873 [2024-07-15 09:40:00.789029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.789036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 00:31:13.873 [2024-07-15 09:40:00.789243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.789256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 00:31:13.873 [2024-07-15 09:40:00.789433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.789440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 00:31:13.873 [2024-07-15 09:40:00.789828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.789834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 00:31:13.873 [2024-07-15 09:40:00.790171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.790178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 00:31:13.873 [2024-07-15 09:40:00.790464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.790470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 00:31:13.873 [2024-07-15 09:40:00.790775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.790781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 00:31:13.873 [2024-07-15 09:40:00.790879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.790885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 
00:31:13.873 [2024-07-15 09:40:00.791092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.791099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 00:31:13.873 [2024-07-15 09:40:00.791435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.791442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 00:31:13.873 [2024-07-15 09:40:00.791765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.791772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 00:31:13.873 [2024-07-15 09:40:00.791923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.791930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 00:31:13.873 [2024-07-15 09:40:00.791968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.791981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 00:31:13.873 [2024-07-15 09:40:00.792312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.792318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 00:31:13.873 [2024-07-15 09:40:00.792502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-07-15 09:40:00.792509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.873 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.792822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.792830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.793032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.793039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.793338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.793345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 
00:31:13.874 [2024-07-15 09:40:00.793532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.793538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.793861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.793868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.794047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.794054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.794232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.794238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.794321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.794327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.794650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.794656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.794857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.794864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.795073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.795079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.795468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.795474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.795782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.795789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 
00:31:13.874 [2024-07-15 09:40:00.795961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.795967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.796331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.796337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.796641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.796648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.797058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.797065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.797233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.797240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.797543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.797550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.797894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.797901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.798082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.798090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.798385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.798391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.798589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.798596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 
00:31:13.874 [2024-07-15 09:40:00.798773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.798780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.799069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.799075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.799282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.799289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.799512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.799518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.799821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.799828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.800007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.800014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.800291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.800298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.800479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.800487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.800809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.800816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 00:31:13.874 [2024-07-15 09:40:00.800994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.874 [2024-07-15 09:40:00.801002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.874 qpair failed and we were unable to recover it. 
00:31:13.874 [2024-07-15 09:40:00.801185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.801191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.801383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.801390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.801684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.801691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.801981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.801988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.802163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.802172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.802214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.802220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.802379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.802385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.802611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.802618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.802936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.802943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.803293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.803300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 
00:31:13.875 [2024-07-15 09:40:00.803498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.803505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.803844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.803851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.804167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.804173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.804346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.804352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.804442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.804448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.804754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.804761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.805081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.805087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.805395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.805401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.805561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.805567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.805783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.805790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 
00:31:13.875 [2024-07-15 09:40:00.806097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.806103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.806409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.806416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.806773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.806781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.807109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.807116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.807261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.807267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.807552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.807558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.807597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.807604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.807971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.807977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.808284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.808290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.808451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.808458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 
00:31:13.875 [2024-07-15 09:40:00.808683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.808690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.808886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.808894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.809250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.809257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.809563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.809570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.809898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.809905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.810074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.810080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.810367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.810374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.810704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.810711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.811033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.811041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.811325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.811332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 
00:31:13.875 [2024-07-15 09:40:00.811534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.875 [2024-07-15 09:40:00.811541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.875 qpair failed and we were unable to recover it. 00:31:13.875 [2024-07-15 09:40:00.811845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.811852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.812161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.812167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.812342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.812348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.812631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.812640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.812807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.812814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.813119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.813125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.813456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.813463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.813625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.813632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.813875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.813881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 
00:31:13.876 [2024-07-15 09:40:00.814192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.814198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.814542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.814549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.814693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.814699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.814781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.814788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.815089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.815096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.815419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.815427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.815754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.815761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.815959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.815974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.816308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.816314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.816490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.816497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 
00:31:13.876 [2024-07-15 09:40:00.816845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.816852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.817049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.817056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.817423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.817430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.817583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.817589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.818017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.818025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.818331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.818338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.818536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.818543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.818880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.818887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.819099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.819106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.819267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.819273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 
00:31:13.876 [2024-07-15 09:40:00.819466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.819474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.819835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.819842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.820060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.820067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.820276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.820283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.820528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.820534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.820833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.820841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.821054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.821061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.821390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.821397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.821739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.821746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.822071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.822079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 
00:31:13.876 [2024-07-15 09:40:00.822403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.822410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.876 [2024-07-15 09:40:00.822747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.876 [2024-07-15 09:40:00.822758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.876 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.823046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.823052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.823385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.823392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.823565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.823573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.823757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.823764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.824113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.824119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.824509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.824516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.824824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.824831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.825168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.825175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 
00:31:13.877 [2024-07-15 09:40:00.825475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.825481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.825800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.825807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.826024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.826031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.826364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.826371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.826672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.826678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.826973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.826979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.827143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.827150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.827398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.827404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.827639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.827647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.827964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.827971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 
00:31:13.877 [2024-07-15 09:40:00.828150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.828157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.828539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.828545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.828838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.828845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.829180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.829186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.829514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.829520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.829704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.829711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.830005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.830011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.830311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.830317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.830635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.830641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.831041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.831047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 
00:31:13.877 [2024-07-15 09:40:00.831354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.831360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.831637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.831643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.832084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.832091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.832264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.832272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.832561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.832567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.832878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.832885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.833062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.833070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.833242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.833249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.833420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.833427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.833777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.833784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 
00:31:13.877 [2024-07-15 09:40:00.834106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.834112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.834419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.834426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.877 qpair failed and we were unable to recover it. 00:31:13.877 [2024-07-15 09:40:00.834731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.877 [2024-07-15 09:40:00.834738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.835059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.835066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.835208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.835217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.835589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.835596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.835784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.835792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.836171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.836178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.836495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.836501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.836828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.836835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 
00:31:13.878 [2024-07-15 09:40:00.837128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.837134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.837458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.837464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.837628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.837634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.837920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.837927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.838130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.838137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.838361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.838368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.838700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.838707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.838883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.838891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.839186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.839192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.839492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.839498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 
00:31:13.878 [2024-07-15 09:40:00.839673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.839681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.839843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.839851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.840190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.840196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.840371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.840378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.840670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.840677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.840966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.840973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.841271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.841278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.841546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.841552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.841895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.841902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.842241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.842247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 
00:31:13.878 [2024-07-15 09:40:00.842470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.842477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.842824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.842831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.843162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.843168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.843332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.843338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.843599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.843606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.843875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.878 [2024-07-15 09:40:00.843882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.878 qpair failed and we were unable to recover it. 00:31:13.878 [2024-07-15 09:40:00.844131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.844138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.844471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.844478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.844783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.844790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.845122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.845129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 
00:31:13.879 [2024-07-15 09:40:00.845453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.845460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.845812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.845819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.846118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.846125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.846163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.846169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.846473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.846482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.846831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.846838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.847140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.847147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.847320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.847327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.847664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.847670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.847906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.847913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 
00:31:13.879 [2024-07-15 09:40:00.848244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.848250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.848588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.848596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.848817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.848824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.849000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.849007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.849367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.849373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.849679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.849685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.849986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.849992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.850297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.850304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.850353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.850359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.850402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.850408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 
00:31:13.879 [2024-07-15 09:40:00.850735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.850742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.850941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.850949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.851160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.851167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.851344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.851351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.851630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.851636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.851815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.851823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.852182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.852188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.852359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.852366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.852700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.852707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.852932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.852939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 
00:31:13.879 [2024-07-15 09:40:00.853110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.853117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.853518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.853525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.853759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.853766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.853954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.853960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.854248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.854254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.854592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.854599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.879 [2024-07-15 09:40:00.854768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.879 [2024-07-15 09:40:00.854775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.879 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.855059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.855066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.855244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.855251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.855540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.855546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 
00:31:13.880 [2024-07-15 09:40:00.855942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.855949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.856261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.856268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.856560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.856566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.856911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.856918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.857263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.857271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.857452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.857459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.857820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.857827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.858003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.858011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.858340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.858347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.858645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.858651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 
00:31:13.880 [2024-07-15 09:40:00.858964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.858970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.859144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.859151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.859436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.859443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.859749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.859760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.860063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.860069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.860268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.860275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.860616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.860623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.860937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.860944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.861122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.861129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.861427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.861433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 
00:31:13.880 [2024-07-15 09:40:00.861597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.861604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.861885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.861892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.862206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.862212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.862260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.862266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.862627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.862634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.862872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.862879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.863217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.863224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.863521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.863527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.863724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.863731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.864086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.864093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 
00:31:13.880 [2024-07-15 09:40:00.864398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.864404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.864600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.864608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.864931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.864938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.865271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.865278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.865682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.865689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.865997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.866003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.866196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.866202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.880 [2024-07-15 09:40:00.866495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.880 [2024-07-15 09:40:00.866502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.880 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.866693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.866699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.866867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.866874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 
00:31:13.881 [2024-07-15 09:40:00.867276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.867283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.867587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.867593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.867898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.867905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.868234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.868240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.868540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.868549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.868764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.868772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.868970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.868978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.869204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.869211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.869528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.869534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.869840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.869846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 
00:31:13.881 [2024-07-15 09:40:00.870015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.870021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.870428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.870434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.870731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.870738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.870959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.870966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.871309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.871316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.871360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.871366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.871686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.871693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.872080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.872087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.872393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.872400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.872714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.872721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 
00:31:13.881 [2024-07-15 09:40:00.873050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.873057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.873268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.873275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.873597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.873605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.873951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.873958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.874142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.874149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.874358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.874366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.874672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.874679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.875012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.875019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.875322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.875330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.875657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.875664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 
00:31:13.881 [2024-07-15 09:40:00.875826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.875833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.876042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.876049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.881 [2024-07-15 09:40:00.876390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.881 [2024-07-15 09:40:00.876397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.881 qpair failed and we were unable to recover it. 00:31:13.882 [2024-07-15 09:40:00.876719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.876726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 00:31:13.882 [2024-07-15 09:40:00.876884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.876891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 00:31:13.882 [2024-07-15 09:40:00.877210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.877218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 00:31:13.882 [2024-07-15 09:40:00.877399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.877407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 00:31:13.882 [2024-07-15 09:40:00.877650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.877657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 00:31:13.882 [2024-07-15 09:40:00.877859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.877867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 00:31:13.882 [2024-07-15 09:40:00.878208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.878215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 
00:31:13.882 [2024-07-15 09:40:00.878526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.878532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 00:31:13.882 [2024-07-15 09:40:00.878716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.878723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 00:31:13.882 [2024-07-15 09:40:00.879018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.879025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 00:31:13.882 [2024-07-15 09:40:00.879302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.879309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 00:31:13.882 [2024-07-15 09:40:00.879542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.879548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 00:31:13.882 [2024-07-15 09:40:00.879842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.879848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 00:31:13.882 [2024-07-15 09:40:00.880158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.880165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 00:31:13.882 [2024-07-15 09:40:00.880512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.880519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 00:31:13.882 [2024-07-15 09:40:00.880860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.880868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 00:31:13.882 [2024-07-15 09:40:00.881205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.881212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 
00:31:13.882 [2024-07-15 09:40:00.881376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.881383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 00:31:13.882 [2024-07-15 09:40:00.881718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.881724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 00:31:13.882 [2024-07-15 09:40:00.882035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.882042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 00:31:13.882 [2024-07-15 09:40:00.882216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.882223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 00:31:13.882 [2024-07-15 09:40:00.882514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.882521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 00:31:13.882 [2024-07-15 09:40:00.882837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.882843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 00:31:13.882 [2024-07-15 09:40:00.883019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.883027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 00:31:13.882 [2024-07-15 09:40:00.883430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.883437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 00:31:13.882 [2024-07-15 09:40:00.883632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.882 [2024-07-15 09:40:00.883639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.882 qpair failed and we were unable to recover it. 00:31:13.882 [2024-07-15 09:40:00.883691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.883 [2024-07-15 09:40:00.883697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.883 qpair failed and we were unable to recover it. 
00:31:13.883 [2024-07-15 09:40:00.884023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.883 [2024-07-15 09:40:00.884030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.883 qpair failed and we were unable to recover it. 00:31:13.883 [2024-07-15 09:40:00.884356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.883 [2024-07-15 09:40:00.884362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.883 qpair failed and we were unable to recover it. 00:31:13.883 [2024-07-15 09:40:00.884541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.883 [2024-07-15 09:40:00.884548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.883 qpair failed and we were unable to recover it. 00:31:13.883 [2024-07-15 09:40:00.884921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.883 [2024-07-15 09:40:00.884928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.883 qpair failed and we were unable to recover it. 00:31:13.883 [2024-07-15 09:40:00.885287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.883 [2024-07-15 09:40:00.885293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.883 qpair failed and we were unable to recover it. 00:31:13.883 [2024-07-15 09:40:00.885596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.883 [2024-07-15 09:40:00.885603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.883 qpair failed and we were unable to recover it. 00:31:13.883 [2024-07-15 09:40:00.885928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.883 [2024-07-15 09:40:00.885935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.883 qpair failed and we were unable to recover it. 00:31:13.883 [2024-07-15 09:40:00.886105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.883 [2024-07-15 09:40:00.886112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.883 qpair failed and we were unable to recover it. 00:31:13.883 [2024-07-15 09:40:00.886414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.883 [2024-07-15 09:40:00.886420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.883 qpair failed and we were unable to recover it. 00:31:13.883 [2024-07-15 09:40:00.886740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.883 [2024-07-15 09:40:00.886746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.883 qpair failed and we were unable to recover it. 
00:31:13.883 [2024-07-15 09:40:00.886948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.883 [2024-07-15 09:40:00.886956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.883 qpair failed and we were unable to recover it. 00:31:13.883 [2024-07-15 09:40:00.887142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.883 [2024-07-15 09:40:00.887151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.883 qpair failed and we were unable to recover it. 00:31:13.883 [2024-07-15 09:40:00.887453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.883 [2024-07-15 09:40:00.887461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.883 qpair failed and we were unable to recover it. 00:31:13.883 [2024-07-15 09:40:00.887640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.883 [2024-07-15 09:40:00.887648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.883 qpair failed and we were unable to recover it. 00:31:13.883 [2024-07-15 09:40:00.887982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.883 [2024-07-15 09:40:00.887990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.883 qpair failed and we were unable to recover it. 00:31:13.883 [2024-07-15 09:40:00.888286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.883 [2024-07-15 09:40:00.888292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.883 qpair failed and we were unable to recover it. 00:31:13.883 [2024-07-15 09:40:00.888590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.883 [2024-07-15 09:40:00.888596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.883 qpair failed and we were unable to recover it. 00:31:13.883 [2024-07-15 09:40:00.888743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.883 [2024-07-15 09:40:00.888750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.883 qpair failed and we were unable to recover it. 00:31:13.883 [2024-07-15 09:40:00.889123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.883 [2024-07-15 09:40:00.889129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.883 qpair failed and we were unable to recover it. 00:31:13.884 [2024-07-15 09:40:00.889357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.889363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 
00:31:13.884 [2024-07-15 09:40:00.889551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.889558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 00:31:13.884 [2024-07-15 09:40:00.889856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.889863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 00:31:13.884 [2024-07-15 09:40:00.890196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.890202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 00:31:13.884 [2024-07-15 09:40:00.890592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.890599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 00:31:13.884 [2024-07-15 09:40:00.890765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.890779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 00:31:13.884 [2024-07-15 09:40:00.891083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.891090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 00:31:13.884 [2024-07-15 09:40:00.891271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.891278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 00:31:13.884 [2024-07-15 09:40:00.891325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.891335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 00:31:13.884 [2024-07-15 09:40:00.891712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.891718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 00:31:13.884 [2024-07-15 09:40:00.892028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.892035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 
00:31:13.884 [2024-07-15 09:40:00.892345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.892351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 00:31:13.884 [2024-07-15 09:40:00.892658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.892665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 00:31:13.884 [2024-07-15 09:40:00.892866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.892873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 00:31:13.884 [2024-07-15 09:40:00.893232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.893238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 00:31:13.884 [2024-07-15 09:40:00.893386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.893393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 00:31:13.884 [2024-07-15 09:40:00.893632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.893638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 00:31:13.884 [2024-07-15 09:40:00.893790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.893798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 00:31:13.884 [2024-07-15 09:40:00.894105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.894111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 00:31:13.884 [2024-07-15 09:40:00.894157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.894162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 00:31:13.884 [2024-07-15 09:40:00.894332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.894339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 
00:31:13.884 [2024-07-15 09:40:00.894672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.894678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 00:31:13.884 [2024-07-15 09:40:00.894847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.894853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 00:31:13.884 [2024-07-15 09:40:00.895153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.895160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 00:31:13.884 [2024-07-15 09:40:00.895352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.895359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 00:31:13.884 [2024-07-15 09:40:00.895525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.895535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 00:31:13.884 [2024-07-15 09:40:00.895888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.884 [2024-07-15 09:40:00.895896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.884 qpair failed and we were unable to recover it. 00:31:13.885 [2024-07-15 09:40:00.896211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.896219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 00:31:13.885 [2024-07-15 09:40:00.896545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.896551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 00:31:13.885 [2024-07-15 09:40:00.896847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.896853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 00:31:13.885 [2024-07-15 09:40:00.897163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.897169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 
00:31:13.885 [2024-07-15 09:40:00.897469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.897477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 00:31:13.885 [2024-07-15 09:40:00.897782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.897790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 00:31:13.885 [2024-07-15 09:40:00.897963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.897970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 00:31:13.885 [2024-07-15 09:40:00.898172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.898179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 00:31:13.885 [2024-07-15 09:40:00.898347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.898354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 00:31:13.885 [2024-07-15 09:40:00.898710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.898716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 00:31:13.885 [2024-07-15 09:40:00.898912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.898920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 00:31:13.885 [2024-07-15 09:40:00.899261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.899268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 00:31:13.885 [2024-07-15 09:40:00.899455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.899462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 00:31:13.885 [2024-07-15 09:40:00.899777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.899784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 
00:31:13.885 [2024-07-15 09:40:00.899978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.899985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 00:31:13.885 [2024-07-15 09:40:00.900047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.900054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 00:31:13.885 [2024-07-15 09:40:00.900236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.900242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 00:31:13.885 [2024-07-15 09:40:00.900573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.900579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 00:31:13.885 [2024-07-15 09:40:00.900887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.900894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 00:31:13.885 [2024-07-15 09:40:00.901226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.901232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 00:31:13.885 [2024-07-15 09:40:00.901599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.901605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 00:31:13.885 [2024-07-15 09:40:00.901905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.901911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 00:31:13.885 [2024-07-15 09:40:00.902230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.902237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 00:31:13.885 [2024-07-15 09:40:00.902402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.902409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 
00:31:13.885 [2024-07-15 09:40:00.902690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.902696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 00:31:13.885 [2024-07-15 09:40:00.903016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.903024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.885 qpair failed and we were unable to recover it. 00:31:13.885 [2024-07-15 09:40:00.903063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.885 [2024-07-15 09:40:00.903071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 00:31:13.886 [2024-07-15 09:40:00.903393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.903399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 00:31:13.886 [2024-07-15 09:40:00.903705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.903711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 00:31:13.886 [2024-07-15 09:40:00.903919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.903925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 00:31:13.886 [2024-07-15 09:40:00.904152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.904159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 00:31:13.886 [2024-07-15 09:40:00.904526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.904533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 00:31:13.886 [2024-07-15 09:40:00.904805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.904812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 00:31:13.886 [2024-07-15 09:40:00.905016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.905023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 
00:31:13.886 [2024-07-15 09:40:00.905386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.905393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 00:31:13.886 [2024-07-15 09:40:00.905588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.905595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 00:31:13.886 [2024-07-15 09:40:00.905897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.905904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 00:31:13.886 [2024-07-15 09:40:00.906226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.906234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 00:31:13.886 [2024-07-15 09:40:00.906540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.906546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 00:31:13.886 [2024-07-15 09:40:00.906842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.906849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 00:31:13.886 [2024-07-15 09:40:00.907150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.907156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 00:31:13.886 [2024-07-15 09:40:00.907422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.907428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 00:31:13.886 [2024-07-15 09:40:00.907760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.907768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 00:31:13.886 [2024-07-15 09:40:00.908065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.908072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 
00:31:13.886 [2024-07-15 09:40:00.908377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.908384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 00:31:13.886 [2024-07-15 09:40:00.908708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.908717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 00:31:13.886 [2024-07-15 09:40:00.909045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.909052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 00:31:13.886 [2024-07-15 09:40:00.909357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.909364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 00:31:13.886 [2024-07-15 09:40:00.909647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.909653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 00:31:13.886 [2024-07-15 09:40:00.909982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.909989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 00:31:13.886 [2024-07-15 09:40:00.910282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.910289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 00:31:13.886 [2024-07-15 09:40:00.910618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.910625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 00:31:13.886 [2024-07-15 09:40:00.910842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.886 [2024-07-15 09:40:00.910849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.886 qpair failed and we were unable to recover it. 00:31:13.887 [2024-07-15 09:40:00.911142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.911149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 
00:31:13.887 [2024-07-15 09:40:00.911470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.911477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 00:31:13.887 [2024-07-15 09:40:00.911780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.911787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 00:31:13.887 [2024-07-15 09:40:00.912071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.912078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 00:31:13.887 [2024-07-15 09:40:00.912415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.912422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 00:31:13.887 [2024-07-15 09:40:00.912598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.912605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 00:31:13.887 [2024-07-15 09:40:00.912885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.912892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 00:31:13.887 [2024-07-15 09:40:00.913081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.913088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 00:31:13.887 [2024-07-15 09:40:00.913467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.913474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 00:31:13.887 [2024-07-15 09:40:00.913518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.913524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 00:31:13.887 [2024-07-15 09:40:00.913851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.913857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 
00:31:13.887 [2024-07-15 09:40:00.914013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.914019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 00:31:13.887 [2024-07-15 09:40:00.914268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.914274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 00:31:13.887 [2024-07-15 09:40:00.914586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.914594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 00:31:13.887 [2024-07-15 09:40:00.914915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.914922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 00:31:13.887 [2024-07-15 09:40:00.915147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.915154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 00:31:13.887 [2024-07-15 09:40:00.915353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.915359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 00:31:13.887 [2024-07-15 09:40:00.915528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.915535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 00:31:13.887 [2024-07-15 09:40:00.915829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.915836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 00:31:13.887 [2024-07-15 09:40:00.916177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.916183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 00:31:13.887 [2024-07-15 09:40:00.916350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.916357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 
00:31:13.887 [2024-07-15 09:40:00.916534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.916541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 00:31:13.887 [2024-07-15 09:40:00.916860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.916867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 00:31:13.887 [2024-07-15 09:40:00.917192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.917199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 00:31:13.887 [2024-07-15 09:40:00.917500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.917506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 00:31:13.887 [2024-07-15 09:40:00.917707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.887 [2024-07-15 09:40:00.917713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.887 qpair failed and we were unable to recover it. 00:31:13.887 [2024-07-15 09:40:00.918040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.918047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 00:31:13.888 [2024-07-15 09:40:00.918222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.918228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 00:31:13.888 [2024-07-15 09:40:00.918590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.918597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 00:31:13.888 [2024-07-15 09:40:00.918915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.918923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 00:31:13.888 [2024-07-15 09:40:00.919247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.919254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 
00:31:13.888 [2024-07-15 09:40:00.919471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.919478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 00:31:13.888 [2024-07-15 09:40:00.919768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.919777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 00:31:13.888 [2024-07-15 09:40:00.919972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.919978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 00:31:13.888 [2024-07-15 09:40:00.920192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.920198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 00:31:13.888 [2024-07-15 09:40:00.920362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.920367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 00:31:13.888 [2024-07-15 09:40:00.920679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.920685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 00:31:13.888 [2024-07-15 09:40:00.920869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.920875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 00:31:13.888 [2024-07-15 09:40:00.921095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.921102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 00:31:13.888 [2024-07-15 09:40:00.921338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.921345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 00:31:13.888 [2024-07-15 09:40:00.921651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.921658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 
00:31:13.888 [2024-07-15 09:40:00.921833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.921841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 00:31:13.888 [2024-07-15 09:40:00.922022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.922028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 00:31:13.888 [2024-07-15 09:40:00.922356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.922362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 00:31:13.888 [2024-07-15 09:40:00.922706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.922713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 00:31:13.888 [2024-07-15 09:40:00.923038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.923045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 00:31:13.888 [2024-07-15 09:40:00.923432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.923438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 00:31:13.888 [2024-07-15 09:40:00.923781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.923788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 00:31:13.888 [2024-07-15 09:40:00.924098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.924105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 00:31:13.888 [2024-07-15 09:40:00.924255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.924262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 00:31:13.888 [2024-07-15 09:40:00.924322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.924328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 
00:31:13.888 [2024-07-15 09:40:00.924644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.924650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 00:31:13.888 [2024-07-15 09:40:00.925020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.925027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 00:31:13.888 [2024-07-15 09:40:00.925223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.888 [2024-07-15 09:40:00.925231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.888 qpair failed and we were unable to recover it. 00:31:13.889 [2024-07-15 09:40:00.925420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.889 [2024-07-15 09:40:00.925426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.889 qpair failed and we were unable to recover it. 00:31:13.889 [2024-07-15 09:40:00.925620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.889 [2024-07-15 09:40:00.925627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.889 qpair failed and we were unable to recover it. 00:31:13.889 [2024-07-15 09:40:00.925839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.889 [2024-07-15 09:40:00.925847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.889 qpair failed and we were unable to recover it. 00:31:13.889 [2024-07-15 09:40:00.926077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.889 [2024-07-15 09:40:00.926083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.889 qpair failed and we were unable to recover it. 00:31:13.889 [2024-07-15 09:40:00.926310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.889 [2024-07-15 09:40:00.926316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.889 qpair failed and we were unable to recover it. 00:31:13.889 [2024-07-15 09:40:00.926689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.889 [2024-07-15 09:40:00.926696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.889 qpair failed and we were unable to recover it. 00:31:13.889 [2024-07-15 09:40:00.926737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.889 [2024-07-15 09:40:00.926743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.889 qpair failed and we were unable to recover it. 
00:31:13.889 [2024-07-15 09:40:00.927072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.889 [2024-07-15 09:40:00.927079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.889 qpair failed and we were unable to recover it. 00:31:13.889 [2024-07-15 09:40:00.927373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.889 [2024-07-15 09:40:00.927379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.889 qpair failed and we were unable to recover it. 00:31:13.889 [2024-07-15 09:40:00.927562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.889 [2024-07-15 09:40:00.927569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.889 qpair failed and we were unable to recover it. 00:31:13.889 [2024-07-15 09:40:00.927707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.889 [2024-07-15 09:40:00.927714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.889 qpair failed and we were unable to recover it. 00:31:13.889 [2024-07-15 09:40:00.927904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.889 [2024-07-15 09:40:00.927912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.889 qpair failed and we were unable to recover it. 00:31:13.889 [2024-07-15 09:40:00.928169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.889 [2024-07-15 09:40:00.928176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.889 qpair failed and we were unable to recover it. 00:31:13.889 [2024-07-15 09:40:00.928351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.889 [2024-07-15 09:40:00.928359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.889 qpair failed and we were unable to recover it. 00:31:13.889 [2024-07-15 09:40:00.928662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.889 [2024-07-15 09:40:00.928668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.889 qpair failed and we were unable to recover it. 00:31:13.889 [2024-07-15 09:40:00.928987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.889 [2024-07-15 09:40:00.928993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.889 qpair failed and we were unable to recover it. 00:31:13.889 [2024-07-15 09:40:00.929169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.889 [2024-07-15 09:40:00.929176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.889 qpair failed and we were unable to recover it. 
00:31:13.889 [2024-07-15 09:40:00.929336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.889 [2024-07-15 09:40:00.929342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.889 qpair failed and we were unable to recover it. 00:31:13.889 [2024-07-15 09:40:00.929514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.889 [2024-07-15 09:40:00.929522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.889 qpair failed and we were unable to recover it. 00:31:13.889 [2024-07-15 09:40:00.929809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.889 [2024-07-15 09:40:00.929816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.889 qpair failed and we were unable to recover it. 00:31:13.889 [2024-07-15 09:40:00.929992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.929999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.930295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.930302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.930640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.930646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.930981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.930987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.931284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.931291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.931686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.931692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.932084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.932091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 
00:31:13.890 [2024-07-15 09:40:00.932297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.932304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.932649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.932655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.932944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.932950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.933148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.933155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.933505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.933512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.933715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.933721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.934003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.934011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.934187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.934193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.934364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.934370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.934615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.934621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 
00:31:13.890 [2024-07-15 09:40:00.934846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.934853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.935144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.935151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.935321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.935327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.935563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.935570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.935882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.935889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.936060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.936067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.936370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.936376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.936680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.936686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.937004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.937011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.937176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.937182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 
00:31:13.890 [2024-07-15 09:40:00.937469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.937476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.937784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.937791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.937975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.937983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.890 [2024-07-15 09:40:00.938171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.890 [2024-07-15 09:40:00.938177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.890 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.938356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.938363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.938539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.938546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.938709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.938715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.938920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.938927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.939261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.939267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.939576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.939582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 
00:31:13.891 [2024-07-15 09:40:00.939885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.939892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.940058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.940067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.940310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.940316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.940470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.940477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.940825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.940832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.941154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.941161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.941360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.941367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.941567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.941574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.941924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.941932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.942243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.942249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 
00:31:13.891 [2024-07-15 09:40:00.942562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.942569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.942745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.942755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.943097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.943103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.943279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.943286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.943582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.943588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.943897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.943904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.944097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.944104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.944272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.944278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.944601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.944608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.944926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.944932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 
00:31:13.891 [2024-07-15 09:40:00.945239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.945246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.945391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.945397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.945685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.945691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.946024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.946031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.946201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.946207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.946444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.946451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.946672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.946678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.946855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.946862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.947205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.947212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 00:31:13.891 [2024-07-15 09:40:00.947577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.891 [2024-07-15 09:40:00.947584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.891 qpair failed and we were unable to recover it. 
00:31:13.892 [2024-07-15 09:40:00.947846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.947853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.948037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.948045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.948428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.948435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.948759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.948766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.949102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.949108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.949308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.949314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.949666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.949672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.949889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.949897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.950215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.950221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.950435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.950442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 
00:31:13.892 [2024-07-15 09:40:00.950688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.950694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.950906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.950914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.951196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.951202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.951512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.951519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.951837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.951844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.952173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.952179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.952493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.952500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.952684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.952691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.952987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.952995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.953312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.953318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 
00:31:13.892 [2024-07-15 09:40:00.953630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.953637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.953813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.953820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.954183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.954190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.954538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.954545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.954854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.954860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.955096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.955103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.955421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.955427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.955591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.955597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.955789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.955796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.956154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.956161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 
00:31:13.892 [2024-07-15 09:40:00.956470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.956476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.956736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.956743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.957023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.957030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.957217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.957223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.957397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.957403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.957692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.957698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.958008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.958015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.958315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.958322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.958624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.958631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 00:31:13.892 [2024-07-15 09:40:00.958932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.892 [2024-07-15 09:40:00.958938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.892 qpair failed and we were unable to recover it. 
00:31:13.892 [2024-07-15 09:40:00.959237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.959244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.959577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.959583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.959759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.959767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.960064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.960071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.960393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.960400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.960669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.960676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.961006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.961013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.961178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.961184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.961528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.961534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.961838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.961845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 
00:31:13.893 [2024-07-15 09:40:00.961881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.961887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.962175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.962183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.962493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.962499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.962802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.962809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.962847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.962853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.963227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.963233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.963412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.963419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.963750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.963759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.964089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.964095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.964414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.964420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 
00:31:13.893 [2024-07-15 09:40:00.964763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.964770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.965154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.965161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.965289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.965295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.965556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.965562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.965603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.965609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.965959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.965966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.966264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.966270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.966612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.966620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.966969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.966976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.967093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.967100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 
00:31:13.893 [2024-07-15 09:40:00.967372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.967379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.967712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.967718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.967921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.967928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.968266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.968272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.968597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.968604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.968926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.968933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.969102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.969109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.969402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.969408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.969743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.969750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.969973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.969979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 
00:31:13.893 [2024-07-15 09:40:00.970205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.893 [2024-07-15 09:40:00.970211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.893 qpair failed and we were unable to recover it. 00:31:13.893 [2024-07-15 09:40:00.970540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.970546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.970856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.970862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.971175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.971182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.971606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.971614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.971911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.971918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.972130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.972136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.972327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.972334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.972629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.972635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.972833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.972840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 
00:31:13.894 [2024-07-15 09:40:00.973213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.973220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.973529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.973538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.973927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.973935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.974259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.974266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.974540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.974546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.974746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.974756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.974992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.975000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.975181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.975189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.975484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.975490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.975679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.975685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 
00:31:13.894 [2024-07-15 09:40:00.975897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.975904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.976221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.976227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.976601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.976607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.976908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.976915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.977234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.977240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.977388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.977395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.977737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.977744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.977903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.977910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.978171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.978178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.978488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.978495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 
00:31:13.894 [2024-07-15 09:40:00.978823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.978830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.979146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.979152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.979460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.979467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.894 [2024-07-15 09:40:00.979776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-15 09:40:00.979783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.894 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.980110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.980116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.980417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.980423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.980622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.980629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.980825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.980832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.981141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.981148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.981356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.981362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 
00:31:13.895 [2024-07-15 09:40:00.981543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.981550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.981872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.981878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.982215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.982221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.982557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.982563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.982863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.982869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.983031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.983037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.983429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.983435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.983627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.983635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.983837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.983843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.984185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.984191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 
00:31:13.895 [2024-07-15 09:40:00.984474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.984481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.984523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.984531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.984963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.984969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.985271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.985277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.985489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.985497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.985682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.985688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.985935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.985943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.986317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.986323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.986620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.986627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.986963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.986969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 
00:31:13.895 [2024-07-15 09:40:00.987179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.987185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.987484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.987491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.987688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.987695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.988030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.988038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.988197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.988203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.988454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.988460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.988665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.988672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.988908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.988915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.989233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.989239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.989398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.989404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 
00:31:13.895 [2024-07-15 09:40:00.989650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.989656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.989817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.989825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.990177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.990184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.990496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.990503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.895 qpair failed and we were unable to recover it. 00:31:13.895 [2024-07-15 09:40:00.990662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.895 [2024-07-15 09:40:00.990668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.991072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.991079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.991490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.991496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.991808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.991814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.992125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.992133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.992418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.992426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 
00:31:13.896 [2024-07-15 09:40:00.992730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.992737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.992912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.992919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.993188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.993195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.993527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.993533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.993710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.993720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.994016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.994022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.994328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.994334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.994501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.994508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.994782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.994790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.995006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.995013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 
00:31:13.896 [2024-07-15 09:40:00.995429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.995435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.995614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.995623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.995904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.995910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.996230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.996236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.996550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.996556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.996875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.996882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.996925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.996930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.997268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.997274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.997598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.997604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.997961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.997968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 
00:31:13.896 [2024-07-15 09:40:00.998277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.998283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.998607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.998614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.998922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.998929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.999296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.999304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:00.999668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:00.999675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:01.000020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:01.000027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:01.000187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:01.000194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:01.000440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:01.000447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:01.000777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:01.000784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:01.001111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:01.001117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 
00:31:13.896 [2024-07-15 09:40:01.001438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:01.001444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:01.001755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:01.001762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:01.002080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:01.002086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:01.002418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:01.002426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:01.002768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.896 [2024-07-15 09:40:01.002775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.896 qpair failed and we were unable to recover it. 00:31:13.896 [2024-07-15 09:40:01.002937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.897 [2024-07-15 09:40:01.002943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.897 qpair failed and we were unable to recover it. 00:31:13.897 [2024-07-15 09:40:01.003239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.897 [2024-07-15 09:40:01.003246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.897 qpair failed and we were unable to recover it. 00:31:13.897 [2024-07-15 09:40:01.003419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.897 [2024-07-15 09:40:01.003425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.897 qpair failed and we were unable to recover it. 00:31:13.897 [2024-07-15 09:40:01.003813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.897 [2024-07-15 09:40:01.003820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.897 qpair failed and we were unable to recover it. 00:31:13.897 [2024-07-15 09:40:01.004147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.897 [2024-07-15 09:40:01.004154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.897 qpair failed and we were unable to recover it. 
00:31:13.897 [2024-07-15 09:40:01.004463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.897 [2024-07-15 09:40:01.004469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.897 qpair failed and we were unable to recover it. 00:31:13.897 [2024-07-15 09:40:01.004777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.897 [2024-07-15 09:40:01.004783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.897 qpair failed and we were unable to recover it. 00:31:13.897 [2024-07-15 09:40:01.005161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.897 [2024-07-15 09:40:01.005168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.897 qpair failed and we were unable to recover it. 00:31:13.897 [2024-07-15 09:40:01.005371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.897 [2024-07-15 09:40:01.005378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.897 qpair failed and we were unable to recover it. 00:31:13.897 [2024-07-15 09:40:01.005574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.897 [2024-07-15 09:40:01.005580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.897 qpair failed and we were unable to recover it. 00:31:13.897 [2024-07-15 09:40:01.005783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.897 [2024-07-15 09:40:01.005790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.897 qpair failed and we were unable to recover it. 00:31:13.897 [2024-07-15 09:40:01.006116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.897 [2024-07-15 09:40:01.006123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.897 qpair failed and we were unable to recover it. 00:31:13.897 [2024-07-15 09:40:01.006430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.897 [2024-07-15 09:40:01.006436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.897 qpair failed and we were unable to recover it. 00:31:13.897 [2024-07-15 09:40:01.006606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.897 [2024-07-15 09:40:01.006612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.897 qpair failed and we were unable to recover it. 00:31:13.897 [2024-07-15 09:40:01.006892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.897 [2024-07-15 09:40:01.006899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.897 qpair failed and we were unable to recover it. 
00:31:13.897 [2024-07-15 09:40:01.007210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.897 [2024-07-15 09:40:01.007217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.897 qpair failed and we were unable to recover it. 00:31:13.897 [2024-07-15 09:40:01.007516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.897 [2024-07-15 09:40:01.007524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.897 qpair failed and we were unable to recover it. 00:31:13.897 [2024-07-15 09:40:01.007817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.897 [2024-07-15 09:40:01.007824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.897 qpair failed and we were unable to recover it. 00:31:13.897 [2024-07-15 09:40:01.008160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.897 [2024-07-15 09:40:01.008167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.897 qpair failed and we were unable to recover it. 00:31:13.897 [2024-07-15 09:40:01.008511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.897 [2024-07-15 09:40:01.008518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.897 qpair failed and we were unable to recover it. 00:31:13.897 [2024-07-15 09:40:01.008861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.897 [2024-07-15 09:40:01.008868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.897 qpair failed and we were unable to recover it. 00:31:13.897 [2024-07-15 09:40:01.009182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.897 [2024-07-15 09:40:01.009190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.897 qpair failed and we were unable to recover it. 00:31:13.897 [2024-07-15 09:40:01.009510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.897 [2024-07-15 09:40:01.009517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.897 qpair failed and we were unable to recover it. 00:31:13.897 [2024-07-15 09:40:01.009692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.897 [2024-07-15 09:40:01.009699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.897 qpair failed and we were unable to recover it. 00:31:13.897 [2024-07-15 09:40:01.010008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.897 [2024-07-15 09:40:01.010015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:13.897 qpair failed and we were unable to recover it. 
00:31:14.184 [2024-07-15 09:40:01.066722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.066729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.067113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.067120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.067418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.067425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.067576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.067583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.067974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.067981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.068159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.068166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.068329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.068336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.068629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.068635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.068946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.068952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.069281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.069288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 
00:31:14.184 [2024-07-15 09:40:01.069465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.069472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.069656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.069662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.069988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.069995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.070312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.070318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.070627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.070633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.070836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.070843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.071020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.071026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.071073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.071079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.071394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.071401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.071705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.071712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 
00:31:14.184 [2024-07-15 09:40:01.072055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.072062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.072354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.072361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.072683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.072690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.073008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.073016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.073331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.073337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.073643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.073650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.074016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.074023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.074333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.074340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.074702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.074708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 00:31:14.184 [2024-07-15 09:40:01.074909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.184 [2024-07-15 09:40:01.074917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.184 qpair failed and we were unable to recover it. 
00:31:14.185 [2024-07-15 09:40:01.075107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.075113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.075437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.075443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.075758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.075766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.075808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.075814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.076177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.076183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.076358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.076365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.076667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.076673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.076970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.076977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.077200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.077207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.077533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.077540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 
00:31:14.185 [2024-07-15 09:40:01.077917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.077924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.078117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.078124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.078206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.078214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.078377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.078384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.078700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.078706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.078939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.078946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.079274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.079280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.079614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.079621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.079932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.079939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.080118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.080124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 
00:31:14.185 [2024-07-15 09:40:01.080457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.080463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.080767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.080774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.081168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.081175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.081528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.081535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.081687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.081694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.081996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.082003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.082301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.082308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.082451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.082458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.082843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.082851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.083152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.083158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 
00:31:14.185 [2024-07-15 09:40:01.083472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.083479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.083781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.083787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.084110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.084117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.084311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.084318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.084624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.084630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.084960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.084967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.085291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.085297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.085703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.085709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.085870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.085877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 00:31:14.185 [2024-07-15 09:40:01.086112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.185 [2024-07-15 09:40:01.086120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.185 qpair failed and we were unable to recover it. 
00:31:14.186 [2024-07-15 09:40:01.086417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.086423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.086595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.086602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.086837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.086844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.087060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.087068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.087316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.087322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.087580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.087586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.087915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.087922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.088092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.088098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.088292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.088298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.088517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.088524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 
00:31:14.186 [2024-07-15 09:40:01.088570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.088577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.088744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.088755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.089064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.089071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.089376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.089383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.089681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.089687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.089996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.090003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.090303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.090310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.090620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.090626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.091015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.091023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.091187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.091194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 
00:31:14.186 [2024-07-15 09:40:01.091572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.091579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.091722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.091729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.092056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.092064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.092370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.092377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.092541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.092547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.092729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.092736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.093047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.093054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.093358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.093364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.093658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.093667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.093711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.093717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 
00:31:14.186 [2024-07-15 09:40:01.093927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.093934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.094281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.094287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.094586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.094593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.094940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.094947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.095122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.095129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.186 [2024-07-15 09:40:01.095503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.186 [2024-07-15 09:40:01.095510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.186 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.095723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.095729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.096055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.096062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.096214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.096221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.096505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.096511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 
00:31:14.187 [2024-07-15 09:40:01.096683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.096689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.097049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.097056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.097447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.097453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.097657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.097666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.098010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.098017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.098323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.098329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.098642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.098648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.098965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.098972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.099196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.099203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.099485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.099491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 
00:31:14.187 [2024-07-15 09:40:01.099790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.099797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.100105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.100111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.100283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.100290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.100686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.100693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.101095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.101102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.101491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.101498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.101796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.101803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.101982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.101989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.102064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.102070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.102364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.102371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 
00:31:14.187 [2024-07-15 09:40:01.102556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.102564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.102880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.102886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.103048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.103054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.103093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.103108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.187 [2024-07-15 09:40:01.103421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.187 [2024-07-15 09:40:01.103428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.187 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.103726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.103732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.103908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.103915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.104081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.104088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.104135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.104142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.104377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.104383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 
00:31:14.188 [2024-07-15 09:40:01.104706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.104712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.105036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.105043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.105344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.105350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.105517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.105524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.105894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.105901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.106196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.106203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.106518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.106524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.106829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.106837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.107012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.107018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.107335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.107341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 
00:31:14.188 [2024-07-15 09:40:01.107638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.107644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.107973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.107980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.108288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.108295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.108607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.108614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.108916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.108923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.109107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.109114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.109285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.109291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.109635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.109642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.109822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.109830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.110018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.110024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 
00:31:14.188 [2024-07-15 09:40:01.110213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.110220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.110473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.110480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.110798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.110806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.110976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.110984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.111281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.111288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.111530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.111537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.111881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.111888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.112053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.112059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.112353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.112360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.188 [2024-07-15 09:40:01.112657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.112664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 
00:31:14.188 [2024-07-15 09:40:01.112982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.188 [2024-07-15 09:40:01.112990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.188 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.113308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.113315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.113660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.113667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.113973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.113979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.114305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.114312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.114491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.114498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.114791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.114798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.115106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.115113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.115438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.115446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.115611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.115617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 
00:31:14.189 [2024-07-15 09:40:01.115967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.115974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.116302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.116308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.116490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.116497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.116762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.116769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.117101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.117107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.117406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.117413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.117717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.117723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.118051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.118057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.118395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.118401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.118733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.118739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 
00:31:14.189 [2024-07-15 09:40:01.118916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.118924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.119134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.119140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.119360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.119366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.119701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.119708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.120068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.120075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.120382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.120388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.120730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.120736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.121066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.121072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.121296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.121303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.121477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.121483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 
00:31:14.189 [2024-07-15 09:40:01.121758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.121766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.122069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.122076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.122263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.122269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.122579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.122586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.122809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.122815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.123018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.123024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.123313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.123319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.123495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.189 [2024-07-15 09:40:01.123502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.189 qpair failed and we were unable to recover it. 00:31:14.189 [2024-07-15 09:40:01.123546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.123553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.123859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.123866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 
00:31:14.190 [2024-07-15 09:40:01.124187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.124193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.124372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.124380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.124673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.124679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.125005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.125012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.125327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.125333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.125505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.125512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.125801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.125808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.126225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.126232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.126538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.126545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.126850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.126857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 
00:31:14.190 [2024-07-15 09:40:01.127057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.127064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.127258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.127264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.127568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.127574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.127895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.127902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.128232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.128239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.128470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.128477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.128683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.128690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.129061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.129067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.129365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.129372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.129669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.129676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 
00:31:14.190 [2024-07-15 09:40:01.130048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.130055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.130235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.130243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.130487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.130494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.130798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.130805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.131110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.131116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.131326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.131332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.131694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.131700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.131876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.131884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.132183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.132190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.132376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.132383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 
00:31:14.190 [2024-07-15 09:40:01.132581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.132589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.132864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.132871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.133206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.133212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.133522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.133528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.133828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.133835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.134047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.134060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.134225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.134232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.134542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.134548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.134858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.190 [2024-07-15 09:40:01.134866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.190 qpair failed and we were unable to recover it. 00:31:14.190 [2024-07-15 09:40:01.135076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.135082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 
00:31:14.191 [2024-07-15 09:40:01.135375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.135382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.135703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.135709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.136015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.136022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.136326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.136333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.136725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.136732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.136946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.136952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.137281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.137288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.137597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.137603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.137769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.137777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.138105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.138111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 
00:31:14.191 [2024-07-15 09:40:01.138304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.138311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.138654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.138662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.138850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.138857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.139160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.139167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.139485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.139492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.139821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.139828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.140147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.140153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.140473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.140479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.140780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.140787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.141138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.141144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 
00:31:14.191 [2024-07-15 09:40:01.141336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.141344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.141380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.141388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.141747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.141761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.141929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.141936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.142211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.142218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.142505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.142511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.142870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.142877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.143155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.143162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.143199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.143204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.143363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.143370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 
00:31:14.191 [2024-07-15 09:40:01.143561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.143567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.143920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.143927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.191 qpair failed and we were unable to recover it. 00:31:14.191 [2024-07-15 09:40:01.144277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.191 [2024-07-15 09:40:01.144283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.144488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.144495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.144684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.144690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.144914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.144921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.145005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.145012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.145317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.145323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.145509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.145516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.145905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.145912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 
00:31:14.192 [2024-07-15 09:40:01.146236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.146242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.146589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.146595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.146634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.146641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.146826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.146833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.147184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.147191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.147533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.147540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.147861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.147867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.148050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.148057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.148345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.148355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.148683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.148690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 
00:31:14.192 [2024-07-15 09:40:01.148985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.148991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.149167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.149174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.149460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.149467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.149802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.149809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.149986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.149993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.150063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.150071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.150375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.150382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.150688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.150694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.150975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.150981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.151296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.151302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 
00:31:14.192 [2024-07-15 09:40:01.151627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.151634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.151800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.151807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.152094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.152100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.152400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.152407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.152717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.152723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.153031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.153038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.153371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.153378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.153709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.153717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.154070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.154077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.154266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.154273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 
00:31:14.192 [2024-07-15 09:40:01.154481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.154487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.155066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.155158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.192 [2024-07-15 09:40:01.155620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.192 [2024-07-15 09:40:01.155655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:14.192 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.156090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.156174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b60000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.156547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.156556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.157068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.157095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.157417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.157425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.157577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.157583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.157957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.157964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.158182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.158189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 
00:31:14.193 [2024-07-15 09:40:01.158515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.158522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.158824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.158831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.158998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.159006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.159243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.159249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.159409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.159415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.159693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.159699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.159902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.159909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.160237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.160244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.160421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.160427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.160683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.160690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 
00:31:14.193 [2024-07-15 09:40:01.160872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.160878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.161073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.161080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.161292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.161299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.161368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.161374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.161705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.161711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.162052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.162059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.162237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.162244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.162486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.162492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.162805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.162813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.162979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.162985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 
00:31:14.193 [2024-07-15 09:40:01.163164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.163170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.163501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.163508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.163861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.163869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.164194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.164202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.164244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.164251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.164568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.164574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.164747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.164766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.164850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.164857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.165185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.165192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.165582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.165588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 
00:31:14.193 [2024-07-15 09:40:01.165767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.165775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.166065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.166071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.166263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.166271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.193 [2024-07-15 09:40:01.166482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.193 [2024-07-15 09:40:01.166488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.193 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.166949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.166956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.166999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.167007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.167174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.167180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.167464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.167470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.167793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.167800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.168131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.168138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 
00:31:14.194 [2024-07-15 09:40:01.168430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.168437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.168733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.168741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.169064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.169072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.169393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.169400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.169573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.169580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.169796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.169803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.170018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.170027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.170355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.170361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.170540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.170547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.170832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.170840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 
00:31:14.194 [2024-07-15 09:40:01.171057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.171064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.171394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.171400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.171598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.171604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.171903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.171910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.172084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.172091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.172453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.172459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.172664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.172672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.172862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.172869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.173198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.173205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.173579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.173585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 
00:31:14.194 [2024-07-15 09:40:01.173911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.173917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.174096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.174103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.174499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.174505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.174805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.174811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.175142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.175149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.175457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.175464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.175640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.175647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.175939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.175946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.176287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.176293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.176585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.176592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 
00:31:14.194 [2024-07-15 09:40:01.176891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.176897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.177080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.177088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.177305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.177312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.177597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.194 [2024-07-15 09:40:01.177603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.194 qpair failed and we were unable to recover it. 00:31:14.194 [2024-07-15 09:40:01.177785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.177792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.178073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.178080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.178484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.178490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.178787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.178794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.179102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.179109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.179410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.179417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 
00:31:14.195 [2024-07-15 09:40:01.179721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.179727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.179955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.179963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.180242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.180249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.180583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.180589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.180889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.180896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.181204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.181212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.181521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.181528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.181704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.181711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.181993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.181999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.182311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.182317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 
00:31:14.195 [2024-07-15 09:40:01.182541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.182548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.182720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.182726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.183025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.183032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.183347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.183354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.183592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.183600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.183928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.183935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.183978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.183985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.184307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.184313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.184617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.184624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.184770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.184777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 
00:31:14.195 [2024-07-15 09:40:01.185020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.185027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.185303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.185309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.185480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.185487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.185678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.185685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.185971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.185978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.186310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.186316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.186609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.186617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.186833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.186840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.187160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.187166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.187376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.187384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 
00:31:14.195 [2024-07-15 09:40:01.187697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.187704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.187921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.195 [2024-07-15 09:40:01.187928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.195 qpair failed and we were unable to recover it. 00:31:14.195 [2024-07-15 09:40:01.188273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.188280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.188581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.188587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.188750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.188761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.189135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.189143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.189464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.189471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.189783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.189790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.190132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.190139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.190295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.190302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 
00:31:14.196 [2024-07-15 09:40:01.190697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.190704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.191094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.191101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.191357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.191364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.191507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.191514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.191820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.191827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.192162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.192169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.192344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.192350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.192641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.192647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.192830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.192837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.193039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.193046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 
00:31:14.196 [2024-07-15 09:40:01.193228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.193235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.193471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.193478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.193791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.193798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.194123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.194130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.194309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.194316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.194630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.194637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.194842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.194849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.195110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.195116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.195430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.195437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.195745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.195755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 
00:31:14.196 [2024-07-15 09:40:01.195922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.195929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.196279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.196286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.196409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.196415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.196561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.196 [2024-07-15 09:40:01.196568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.196 qpair failed and we were unable to recover it. 00:31:14.196 [2024-07-15 09:40:01.196739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.196746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.197055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.197062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.197448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.197454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.197760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.197768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.198074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.198081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.198423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.198430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 
00:31:14.197 [2024-07-15 09:40:01.198785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.198792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.198977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.198984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.199147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.199153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.199448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.199454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.199762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.199769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.200061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.200069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.200275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.200282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.200469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.200475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.200862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.200870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.201106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.201113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 
00:31:14.197 [2024-07-15 09:40:01.201441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.201448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.201763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.201770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.202069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.202075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.202383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.202390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.202705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.202712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.202939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.202946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.203271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.203278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.203459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.203466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.203744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.203753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.204166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.204173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 
00:31:14.197 [2024-07-15 09:40:01.204490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.204496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.204710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.204717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.204934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.204941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.205254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.205260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.205588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.205594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.205802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.205815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.206157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.206163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.206462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.206468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.206610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.206617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-15 09:40:01.206802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.206808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.197 qpair failed and we were unable to recover it. 
00:31:14.197 [2024-07-15 09:40:01.207113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.197 [2024-07-15 09:40:01.207120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.207470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.207477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.207675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.207683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.208018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.208025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.208199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.208205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.208591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.208597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.208897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.208904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.209220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.209227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.209534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.209541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.209873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.209880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 
00:31:14.198 [2024-07-15 09:40:01.210203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.210209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.210530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.210536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.210706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.210712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.211001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.211008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.211188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.211196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.211494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.211503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.211808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.211815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.212138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.212144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.212441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.212447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.212759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.212765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 
00:31:14.198 [2024-07-15 09:40:01.213052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.213058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.213373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.213380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.213698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.213704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.214034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.214040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.214250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.214257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.214587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.214593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.214865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.214871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.214940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.214946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.215107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.215114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.215364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.215371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 
00:31:14.198 [2024-07-15 09:40:01.215573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.215580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.215832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.215839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.216193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.216200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.216504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.216511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.216678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.216685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.217041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.217048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.217275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.217281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.217594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.217600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.217810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.217816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.218191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.218198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 
00:31:14.198 [2024-07-15 09:40:01.218437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.198 [2024-07-15 09:40:01.218444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-15 09:40:01.218792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.218799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.218992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.218998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.219296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.219302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.219338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.219344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.219726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.219732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.220039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.220046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.220292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.220299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.220624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.220631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.220933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.220939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 
00:31:14.199 [2024-07-15 09:40:01.221161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.221168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.221526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.221533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.221844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.221851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.222188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.222195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.222513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.222520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.222701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.222709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.223073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.223081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.223122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.223128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.223460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.223467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.223769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.223776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 
00:31:14.199 [2024-07-15 09:40:01.224094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.224101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.224410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.224418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.224598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.224605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.224904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.224912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.225232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.225239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.225556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.225563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.225878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.225885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.225928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.225934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.226254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.226260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.226435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.226443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 
00:31:14.199 [2024-07-15 09:40:01.226726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.226733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.227035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.227042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.227083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.227089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.227251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.227257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.227583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.227589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.227864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.227871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.228189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.228195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.228530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.228537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.228720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.228727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.229070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.229077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 
00:31:14.199 [2024-07-15 09:40:01.229270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.229278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.229617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.199 [2024-07-15 09:40:01.229624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.199 qpair failed and we were unable to recover it. 00:31:14.199 [2024-07-15 09:40:01.229781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.229788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.230153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.230159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.230308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.230315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.230464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.230470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.230768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.230775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.231060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.231067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.231373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.231381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.231558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.231566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 
00:31:14.200 [2024-07-15 09:40:01.231606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.231613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.231813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.231820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.232166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.232173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.232438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.232445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.232736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.232742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.232943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.232952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.233175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.233182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.233460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.233467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.233639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.233645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.233834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.233840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 
00:31:14.200 [2024-07-15 09:40:01.234183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.234190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.234335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.234341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.234673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.234681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.235002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.235009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.235330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.235336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.235516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.235523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.235867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.235874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.236051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.236058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.236355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.236361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.236709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.236716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 
00:31:14.200 [2024-07-15 09:40:01.237030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.237037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.237341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.237347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.237490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.237497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.237784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.237791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.238028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.238035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.238364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.200 [2024-07-15 09:40:01.238370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.200 qpair failed and we were unable to recover it. 00:31:14.200 [2024-07-15 09:40:01.238672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.238679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.239062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.239069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.239112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.239118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.239303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.239310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 
00:31:14.201 [2024-07-15 09:40:01.239647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.239654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.239984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.239991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.240287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.240294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.240597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.240603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.240931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.240939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.241244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.241252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.241599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.241606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.241912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.241919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.242241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.242248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.242555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.242562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 
00:31:14.201 [2024-07-15 09:40:01.242890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.242897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.243211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.243217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.243398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.243405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.243645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.243651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.243853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.243860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.244154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.244163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.244447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.244453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.244812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.244819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.245005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.245013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.245316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.245323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 
00:31:14.201 [2024-07-15 09:40:01.245639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.245645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.245998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.246005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.246309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.246315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.246512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.246519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.246877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.246885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.247216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.247223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.247402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.247410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.247718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.247724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.247884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.247891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.248128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.248135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 
00:31:14.201 [2024-07-15 09:40:01.248406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.248412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.248764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.248771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.249114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.249121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.249416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.249424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.249762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.249769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.250097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.201 [2024-07-15 09:40:01.250103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.201 qpair failed and we were unable to recover it. 00:31:14.201 [2024-07-15 09:40:01.250419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.250426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.250737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.250744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.251072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.251079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.251410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.251418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 
00:31:14.202 [2024-07-15 09:40:01.251774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.251781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.252102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.252109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.252414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.252420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.252572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.252579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.252976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.252983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.253289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.253296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.253603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.253610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.253909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.253916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.254274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.254281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.254457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.254470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 
00:31:14.202 [2024-07-15 09:40:01.254554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.254560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.254861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.254868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.255183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.255190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.255375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.255382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.255777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.255784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.256105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.256114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.256414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.256420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.256730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.256736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.257056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.257063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.257229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.257236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 
00:31:14.202 [2024-07-15 09:40:01.257581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.257588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.257932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.257940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.258284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.258291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.258366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.258374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.258572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.258578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.258882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.258889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.259114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.259121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.259288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.259296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.259618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.259625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.259932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.259940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 
00:31:14.202 [2024-07-15 09:40:01.260137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.260144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.260437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.260444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.260765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.260773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.260961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.260970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.261282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.261288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.261593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.261600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.202 qpair failed and we were unable to recover it. 00:31:14.202 [2024-07-15 09:40:01.261754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.202 [2024-07-15 09:40:01.261761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.262145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.262153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.262462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.262470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.262748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.262759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 
00:31:14.203 [2024-07-15 09:40:01.263078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.263084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.263398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.263404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.263687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.263693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.263876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.263882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.264174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.264181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.264385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.264393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.264757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.264764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.265122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.265130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.265454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.265461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.265771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.265778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 
00:31:14.203 [2024-07-15 09:40:01.266030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.266036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.266376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.266383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.266726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.266732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.266909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.266916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.267308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.267316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.267520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.267530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.267960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.267967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.268203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.268210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.268495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.268503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.268846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.268854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 
00:31:14.203 [2024-07-15 09:40:01.269126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.269133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.269459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.269466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.269637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.269643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.270008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.270015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.270322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.270328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.270551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.270559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.270896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.270903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.271221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.271228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.271408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.271415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.271700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.271706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 
00:31:14.203 [2024-07-15 09:40:01.271895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.271902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.272240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.272246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.272441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.272449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.272812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.272820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.273030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.273037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.273402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.273410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.273632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.203 [2024-07-15 09:40:01.273639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.203 qpair failed and we were unable to recover it. 00:31:14.203 [2024-07-15 09:40:01.273967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.273974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.274148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.274155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.274457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.274464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 
00:31:14.204 [2024-07-15 09:40:01.274778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.274785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.274981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.274988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.275273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.275280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.275578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.275585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.275762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.275769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.276051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.276058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.276296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.276302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.276599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.276606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.276919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.276926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.277268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.277275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 
00:31:14.204 [2024-07-15 09:40:01.277483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.277489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.277779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.277787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.278111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.278118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.278468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.278475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.278644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.278651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.278696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.278705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.278896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.278904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.279238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.279244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.279669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.279675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.279850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.279858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 
00:31:14.204 [2024-07-15 09:40:01.280193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.280200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.280524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.280531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.280763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.280770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.280960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.280967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.281264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.281271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.281442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.281449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.281889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.281896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.282092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.282099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.282349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.282356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.282654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.282661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 
00:31:14.204 [2024-07-15 09:40:01.282859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.282866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.283032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.283039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.204 qpair failed and we were unable to recover it. 00:31:14.204 [2024-07-15 09:40:01.283215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.204 [2024-07-15 09:40:01.283222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.283512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.283519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.283840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.283848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.284013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.284020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.284306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.284313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.284631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.284638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.284825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.284840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.285230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.285237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 
00:31:14.205 [2024-07-15 09:40:01.285542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.285550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.285617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.285624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.290182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.290210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.290540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.290548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.290981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.291009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.291239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.291248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.291579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.291586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.291766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.291774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.292031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.292038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.292358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.292366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 
00:31:14.205 [2024-07-15 09:40:01.292755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.292763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.293062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.293069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.293394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.293401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.293700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.293707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.294037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.294045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.294282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.294293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.294600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.294607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.294782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.294790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.295000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.295007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.295347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.295354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 
00:31:14.205 [2024-07-15 09:40:01.295695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.295701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.295886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.295895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.296114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.296121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.296292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.296299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.296689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.296696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.296868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.296876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.297044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.297051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.297362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.297369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.297531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.297538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.297927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.297934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 
00:31:14.205 [2024-07-15 09:40:01.298128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.298136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.298337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.298344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.205 [2024-07-15 09:40:01.298497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.205 [2024-07-15 09:40:01.298505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.205 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.298928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.298937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.299327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.299334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.299639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.299645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.299792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.299799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.300105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.300111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.300446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.300453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.300656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.300663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 
00:31:14.206 [2024-07-15 09:40:01.300972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.300978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.301178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.301186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.301487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.301494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.301816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.301823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.302223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.302230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.302399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.302406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.302628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.302634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.302813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.302820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.303098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.303104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.303335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.303341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 
00:31:14.206 [2024-07-15 09:40:01.303662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.303668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.303875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.303881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.304107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.304113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.304395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.304401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.304572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.304578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.304852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.304860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.305042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.305048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.305443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.305450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.305627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.305634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.305922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.305929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 
00:31:14.206 [2024-07-15 09:40:01.305967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.305973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.306384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.306390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.306615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.306623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.306930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.306936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.307111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.307117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.307474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.307481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.307720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.307726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.307949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.307957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.308318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.308324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.308624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.308631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 
00:31:14.206 [2024-07-15 09:40:01.309018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.309024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.309344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.309351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.309587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.309593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.206 qpair failed and we were unable to recover it. 00:31:14.206 [2024-07-15 09:40:01.309792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.206 [2024-07-15 09:40:01.309799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.310193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.310200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.310508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.310515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.310853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.310859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.311041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.311048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.311346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.311352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.311657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.311663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 
00:31:14.207 [2024-07-15 09:40:01.311978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.311985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.312209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.312216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.312383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.312390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.312458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.312465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.312776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.312783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.313148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.313154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.313464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.313470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.313794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.313801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.314131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.314138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.314449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.314456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 
00:31:14.207 [2024-07-15 09:40:01.314795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.314801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.315094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.315100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.315329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.315336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.315621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.315628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.315935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.315943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.316243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.316249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.316583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.316589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.316898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.316905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.317096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.317103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.317272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.317278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 
00:31:14.207 [2024-07-15 09:40:01.317469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.317476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.317787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.317794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.317977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.317984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.318256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.318262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.318671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.318678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.318717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.318723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.318893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.318900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.319190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.319197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.319529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.319536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.319867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.319875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 
00:31:14.207 [2024-07-15 09:40:01.320099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.320106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.320444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.320451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.320651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.320658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.320991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.207 [2024-07-15 09:40:01.320997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.207 qpair failed and we were unable to recover it. 00:31:14.207 [2024-07-15 09:40:01.321331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.321337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.321645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.321652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.321830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.321838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.322176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.322182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.322404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.322411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.322859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.322866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 
00:31:14.208 [2024-07-15 09:40:01.323237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.323244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.323406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.323412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.323785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.323794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.324100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.324107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.324281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.324294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.324625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.324631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.324788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.324795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.324980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.324986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.325330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.325336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.325656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.325662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 
00:31:14.208 [2024-07-15 09:40:01.325993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.326000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.326315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.326323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.326594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.326600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.326901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.326908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.327246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.327253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.327582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.327589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.327907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.327914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.328093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.328100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.328432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.328438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.328615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.328622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 
00:31:14.208 [2024-07-15 09:40:01.328788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.328796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.329118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.329125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.329468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.329475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.329691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.329697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.329996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.330003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.330316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.330323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.330523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.330530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.330779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.330786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.331114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.331120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.331445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.331452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 
00:31:14.208 [2024-07-15 09:40:01.331660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.331667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.331863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.331870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.332056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.332062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.332344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.332351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.332535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.208 [2024-07-15 09:40:01.332541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.208 qpair failed and we were unable to recover it. 00:31:14.208 [2024-07-15 09:40:01.332742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.209 [2024-07-15 09:40:01.332749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.209 qpair failed and we were unable to recover it. 00:31:14.209 [2024-07-15 09:40:01.332906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.209 [2024-07-15 09:40:01.332913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.209 qpair failed and we were unable to recover it. 00:31:14.209 [2024-07-15 09:40:01.333114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.209 [2024-07-15 09:40:01.333122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.209 qpair failed and we were unable to recover it. 00:31:14.209 [2024-07-15 09:40:01.333323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.209 [2024-07-15 09:40:01.333329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.209 qpair failed and we were unable to recover it. 00:31:14.209 [2024-07-15 09:40:01.333513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.209 [2024-07-15 09:40:01.333520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.209 qpair failed and we were unable to recover it. 
00:31:14.209 [2024-07-15 09:40:01.333814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.209 [2024-07-15 09:40:01.333820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:14.209 qpair failed and we were unable to recover it.
00:31:14.209 [2024-07-15 09:40:01.334238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.209 [2024-07-15 09:40:01.334245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:14.209 qpair failed and we were unable to recover it.
00:31:14.209 [2024-07-15 09:40:01.334547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.209 [2024-07-15 09:40:01.334555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:14.209 qpair failed and we were unable to recover it.
00:31:14.209 [2024-07-15 09:40:01.334889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.209 [2024-07-15 09:40:01.334896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:14.209 qpair failed and we were unable to recover it.
00:31:14.209 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:31:14.209 [2024-07-15 09:40:01.335217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.209 [2024-07-15 09:40:01.335226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:14.209 qpair failed and we were unable to recover it.
00:31:14.209 [2024-07-15 09:40:01.335409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.209 [2024-07-15 09:40:01.335416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:14.209 qpair failed and we were unable to recover it.
00:31:14.209 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:31:14.209 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:31:14.209 [2024-07-15 09:40:01.335782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.209 [2024-07-15 09:40:01.335790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:14.209 qpair failed and we were unable to recover it.
00:31:14.209 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:31:14.209 [2024-07-15 09:40:01.336097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.209 [2024-07-15 09:40:01.336104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420
00:31:14.209 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:14.209 [2024-07-15 09:40:01.336299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.209 [2024-07-15 09:40:01.336308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.209 qpair failed and we were unable to recover it. 00:31:14.209 [2024-07-15 09:40:01.336603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.209 [2024-07-15 09:40:01.336609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.209 qpair failed and we were unable to recover it. 00:31:14.209 [2024-07-15 09:40:01.336916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.209 [2024-07-15 09:40:01.336923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.209 qpair failed and we were unable to recover it. 00:31:14.209 [2024-07-15 09:40:01.337166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.209 [2024-07-15 09:40:01.337172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.209 qpair failed and we were unable to recover it. 00:31:14.209 [2024-07-15 09:40:01.337349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.209 [2024-07-15 09:40:01.337355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.209 qpair failed and we were unable to recover it. 00:31:14.209 [2024-07-15 09:40:01.337671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.209 [2024-07-15 09:40:01.337678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.209 qpair failed and we were unable to recover it. 00:31:14.209 [2024-07-15 09:40:01.338000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.209 [2024-07-15 09:40:01.338007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.209 qpair failed and we were unable to recover it. 00:31:14.209 [2024-07-15 09:40:01.338333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.209 [2024-07-15 09:40:01.338340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.209 qpair failed and we were unable to recover it. 00:31:14.209 [2024-07-15 09:40:01.338535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.209 [2024-07-15 09:40:01.338543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.209 qpair failed and we were unable to recover it. 
00:31:14.209 [2024-07-15 09:40:01.338881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.209 [2024-07-15 09:40:01.338888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.209 qpair failed and we were unable to recover it. 00:31:14.209 [2024-07-15 09:40:01.339108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.209 [2024-07-15 09:40:01.339117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.209 qpair failed and we were unable to recover it. 00:31:14.209 [2024-07-15 09:40:01.339473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.209 [2024-07-15 09:40:01.339479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.209 qpair failed and we were unable to recover it. 00:31:14.209 [2024-07-15 09:40:01.339680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.209 [2024-07-15 09:40:01.339688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.209 qpair failed and we were unable to recover it. 00:31:14.209 [2024-07-15 09:40:01.339880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.209 [2024-07-15 09:40:01.339887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.209 qpair failed and we were unable to recover it. 00:31:14.209 [2024-07-15 09:40:01.340118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.209 [2024-07-15 09:40:01.340125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.209 qpair failed and we were unable to recover it. 00:31:14.209 [2024-07-15 09:40:01.340288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.209 [2024-07-15 09:40:01.340296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.209 qpair failed and we were unable to recover it. 00:31:14.209 [2024-07-15 09:40:01.340482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.209 [2024-07-15 09:40:01.340491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.209 qpair failed and we were unable to recover it. 00:31:14.209 [2024-07-15 09:40:01.340794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.209 [2024-07-15 09:40:01.340801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.209 qpair failed and we were unable to recover it. 00:31:14.209 [2024-07-15 09:40:01.341103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.209 [2024-07-15 09:40:01.341111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.209 qpair failed and we were unable to recover it. 
00:31:14.209 [2024-07-15 09:40:01.341433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.209 [2024-07-15 09:40:01.341439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.209 qpair failed and we were unable to recover it. 00:31:14.209 [2024-07-15 09:40:01.341738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.341745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.341915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.341922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.342317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.342323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.342632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.342639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.342941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.342948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.343265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.343272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.343443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.343449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.343762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.343770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.343939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.343946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 
00:31:14.210 [2024-07-15 09:40:01.344269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.344277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.344642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.344649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.344906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.344913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.345259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.345267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.345602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.345609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.345931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.345938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.346257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.346265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.346470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.346478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.346854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.346862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.347180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.347188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 
00:31:14.210 [2024-07-15 09:40:01.347388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.347395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.347632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.347641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.347835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.347843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.348220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.348226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.348565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.348572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.348979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.348986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.349134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.349141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.349332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.349338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.349545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.349552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.349889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.349897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 
00:31:14.210 [2024-07-15 09:40:01.350087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.350094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.350264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.350270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.350313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.350319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.350673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.350680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.350882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.350890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.351114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.351121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.351342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.351349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.351555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.351561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.351908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.351915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.352227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.352234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 
00:31:14.210 [2024-07-15 09:40:01.352554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.352561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.352759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.210 [2024-07-15 09:40:01.352767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.210 qpair failed and we were unable to recover it. 00:31:14.210 [2024-07-15 09:40:01.353094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.353101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.353402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.353408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.353708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.353714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.353758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.353765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.354068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.354075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.354383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.354390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.354696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.354703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.354919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.354926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 
00:31:14.211 [2024-07-15 09:40:01.355111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.355119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.355283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.355290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.355668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.355674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.355995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.356003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.356389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.356397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.356598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.356606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.356791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.356798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.356981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.356987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.357151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.357157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.357193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.357199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 
00:31:14.211 [2024-07-15 09:40:01.357414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.357421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.357591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.357597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.357807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.357814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.358148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.358155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.358486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.358493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.358814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.358820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.359138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.359146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.359462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.359470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.359785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.359792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.360143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.360150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 
00:31:14.211 [2024-07-15 09:40:01.360463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.360469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.360656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.360662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.360961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.360968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.361286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.361293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.361615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.361623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.361834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.361841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.362155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.362161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.362465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.362472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.362771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.362778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.362815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.362821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 
00:31:14.211 [2024-07-15 09:40:01.363151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.363158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.363471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.363477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.211 [2024-07-15 09:40:01.363679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.211 [2024-07-15 09:40:01.363687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.211 qpair failed and we were unable to recover it. 00:31:14.212 [2024-07-15 09:40:01.364035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.212 [2024-07-15 09:40:01.364042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.212 qpair failed and we were unable to recover it. 00:31:14.212 [2024-07-15 09:40:01.364086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.212 [2024-07-15 09:40:01.364092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.212 qpair failed and we were unable to recover it. 00:31:14.212 [2024-07-15 09:40:01.364394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.212 [2024-07-15 09:40:01.364400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.212 qpair failed and we were unable to recover it. 00:31:14.212 [2024-07-15 09:40:01.364707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.212 [2024-07-15 09:40:01.364713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.212 qpair failed and we were unable to recover it. 00:31:14.212 [2024-07-15 09:40:01.365034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.212 [2024-07-15 09:40:01.365041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.212 qpair failed and we were unable to recover it. 00:31:14.212 [2024-07-15 09:40:01.365342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.212 [2024-07-15 09:40:01.365348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.212 qpair failed and we were unable to recover it. 00:31:14.212 [2024-07-15 09:40:01.365639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.212 [2024-07-15 09:40:01.365646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.212 qpair failed and we were unable to recover it. 
00:31:14.479 [2024-07-15 09:40:01.365976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.479 [2024-07-15 09:40:01.365984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.479 qpair failed and we were unable to recover it. 00:31:14.479 [2024-07-15 09:40:01.366372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.479 [2024-07-15 09:40:01.366379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.479 qpair failed and we were unable to recover it. 00:31:14.479 [2024-07-15 09:40:01.366677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.479 [2024-07-15 09:40:01.366685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.479 qpair failed and we were unable to recover it. 00:31:14.479 [2024-07-15 09:40:01.366858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.479 [2024-07-15 09:40:01.366867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.479 qpair failed and we were unable to recover it. 00:31:14.479 [2024-07-15 09:40:01.367176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.479 [2024-07-15 09:40:01.367183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.479 qpair failed and we were unable to recover it. 00:31:14.479 [2024-07-15 09:40:01.367381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.479 [2024-07-15 09:40:01.367388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.479 qpair failed and we were unable to recover it. 00:31:14.479 [2024-07-15 09:40:01.367695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.479 [2024-07-15 09:40:01.367702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.479 qpair failed and we were unable to recover it. 00:31:14.479 [2024-07-15 09:40:01.368002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.479 [2024-07-15 09:40:01.368009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.479 qpair failed and we were unable to recover it. 00:31:14.479 [2024-07-15 09:40:01.368324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.479 [2024-07-15 09:40:01.368330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.479 qpair failed and we were unable to recover it. 00:31:14.479 [2024-07-15 09:40:01.368501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.479 [2024-07-15 09:40:01.368507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.479 qpair failed and we were unable to recover it. 
00:31:14.479 [2024-07-15 09:40:01.368665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.479 [2024-07-15 09:40:01.368673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.479 qpair failed and we were unable to recover it. 00:31:14.479 [2024-07-15 09:40:01.368875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.479 [2024-07-15 09:40:01.368883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.479 qpair failed and we were unable to recover it. 00:31:14.479 [2024-07-15 09:40:01.369068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.479 [2024-07-15 09:40:01.369075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.479 qpair failed and we were unable to recover it. 00:31:14.479 [2024-07-15 09:40:01.369190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.479 [2024-07-15 09:40:01.369197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.479 qpair failed and we were unable to recover it. 00:31:14.479 [2024-07-15 09:40:01.369523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.479 [2024-07-15 09:40:01.369530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.479 qpair failed and we were unable to recover it. 00:31:14.479 [2024-07-15 09:40:01.369849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.479 [2024-07-15 09:40:01.369858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.479 qpair failed and we were unable to recover it. 00:31:14.479 [2024-07-15 09:40:01.370165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.479 [2024-07-15 09:40:01.370172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.479 qpair failed and we were unable to recover it. 00:31:14.479 [2024-07-15 09:40:01.370524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.479 [2024-07-15 09:40:01.370531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.479 qpair failed and we were unable to recover it. 00:31:14.479 [2024-07-15 09:40:01.370728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.479 [2024-07-15 09:40:01.370736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.479 qpair failed and we were unable to recover it. 00:31:14.479 [2024-07-15 09:40:01.371073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.479 [2024-07-15 09:40:01.371080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.479 qpair failed and we were unable to recover it. 
00:31:14.479 [2024-07-15 09:40:01.371255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.479 [2024-07-15 09:40:01.371264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.479 qpair failed and we were unable to recover it. 00:31:14.479 [2024-07-15 09:40:01.371475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.479 [2024-07-15 09:40:01.371482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.479 qpair failed and we were unable to recover it. 00:31:14.479 [2024-07-15 09:40:01.371759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.371766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.371942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.371949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.372114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.372121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.372339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.372345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.372660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.372667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.372957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.372964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.373172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.373178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.373475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.373482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 
00:31:14.480 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.480 [2024-07-15 09:40:01.373685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.373693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.373892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.373901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:14.480 [2024-07-15 09:40:01.374214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.374222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.480 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:14.480 [2024-07-15 09:40:01.374529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.374538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.374886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.374894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.375228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.375235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.375413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.375421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.375697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.375703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 
00:31:14.480 [2024-07-15 09:40:01.376021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.376029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.376331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.376337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.376676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.376682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.377008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.377016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.377321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.377328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.377550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.377557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.377717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.377724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.377904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.377911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.378251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.378258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.378480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.378487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 
00:31:14.480 [2024-07-15 09:40:01.378808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.378814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.379128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.379135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.379437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.379443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.379623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.379631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.379934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.379941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.380253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.380259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.380482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.380489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.380654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.380662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.380765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.380772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.381069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.381076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 
00:31:14.480 [2024-07-15 09:40:01.381251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.381259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.480 [2024-07-15 09:40:01.381567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.480 [2024-07-15 09:40:01.381573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.480 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.381907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.381914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.382092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.382099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.382492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.382499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.382814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.382821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.383056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.383063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.383370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.383376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.383708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.383715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.384016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.384023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 
00:31:14.481 [2024-07-15 09:40:01.384225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.384234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.384588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.384595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.384966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.384973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.385012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.385018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.385348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.385355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.385511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.385519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.385593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.385599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.385908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.385916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.386242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.386249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.386430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.386437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 
00:31:14.481 [2024-07-15 09:40:01.386744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.386769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.387088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.387094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.387399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.387407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.387748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.387759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.388099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.388106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.388414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.388422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.388587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.388593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.388988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.388995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.389311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.389317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.389617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.389624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 
00:31:14.481 [2024-07-15 09:40:01.389931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.389939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.390263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.390269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.390468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.390475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.390731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.390738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 Malloc0 00:31:14.481 [2024-07-15 09:40:01.390940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.390946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.481 [2024-07-15 09:40:01.391245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.391251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:14.481 [2024-07-15 09:40:01.391426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.391433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.481 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:14.481 [2024-07-15 09:40:01.391722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.391729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 
00:31:14.481 [2024-07-15 09:40:01.391864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.391870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.392166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.392173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.481 qpair failed and we were unable to recover it. 00:31:14.481 [2024-07-15 09:40:01.392498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.481 [2024-07-15 09:40:01.392505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.392690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.392696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.392947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.392954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.393185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.393192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.393518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.393524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.393825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.393831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.394002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.394010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.394251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.394257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 
00:31:14.482 [2024-07-15 09:40:01.394401] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.482 [2024-07-15 09:40:01.394566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.394574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.394898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.394904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.395207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.395214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.395511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.395517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.395813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.395820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.396149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.396157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.396462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.396469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.396634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.396641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.396960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.396967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 
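Interleaved with the connect failures, target_disconnect.sh@21 issues `rpc_cmd nvmf_create_transport -t tcp -o`, and the tcp.c "*** TCP Transport Init ***" notice above confirms the TCP transport came up inside the target. The harness's rpc_cmd wrapper ultimately drives SPDK's scripts/rpc.py; as a standalone sketch (flags copied verbatim from the rpc_cmd call, with -o left unexpanded since the log does not spell out its long form):

# create the NVMe-oF TCP transport in the running nvmf_tgt
./scripts/rpc.py nvmf_create_transport -t tcp -o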
00:31:14.482 [2024-07-15 09:40:01.397277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.397283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.397591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.397597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.397913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.397919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.398240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.398246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.398431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.398438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.398663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.398670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.398855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.398863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.399194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.399200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.399578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.399584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.399756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.399763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 
00:31:14.482 [2024-07-15 09:40:01.399806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.399813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.400115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.400121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.400434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.400440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.400757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.400764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.401073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.401080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.401243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.401250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.401530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.401537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.401754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.401760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.401927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.401933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.402236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.402242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 
00:31:14.482 [2024-07-15 09:40:01.402450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.402458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 [2024-07-15 09:40:01.402826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.402832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.482 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.482 [2024-07-15 09:40:01.403031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.482 [2024-07-15 09:40:01.403038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.482 qpair failed and we were unable to recover it. 00:31:14.483 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:14.483 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.483 [2024-07-15 09:40:01.403273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:14.483 [2024-07-15 09:40:01.403280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 [2024-07-15 09:40:01.403564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.403570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 [2024-07-15 09:40:01.403913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.403919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 [2024-07-15 09:40:01.404265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.404272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 [2024-07-15 09:40:01.404474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.404481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 
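target_disconnect.sh@22 then creates the subsystem with `rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001`, i.e. allow-any-host plus a fixed serial number. The equivalent direct rpc.py invocation would look roughly like this sketch:

# define the I/O subsystem under test
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001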
00:31:14.483 [2024-07-15 09:40:01.404829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.404836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 [2024-07-15 09:40:01.405127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.405134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 [2024-07-15 09:40:01.405314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.405320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 [2024-07-15 09:40:01.405503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.405510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 [2024-07-15 09:40:01.405733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.405741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 [2024-07-15 09:40:01.405924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.405932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 [2024-07-15 09:40:01.406273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.406280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 [2024-07-15 09:40:01.406544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.406551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 [2024-07-15 09:40:01.406733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.406740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 [2024-07-15 09:40:01.407189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.407196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 
00:31:14.483 [2024-07-15 09:40:01.407493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.407499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 [2024-07-15 09:40:01.407676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.407683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 [2024-07-15 09:40:01.407983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.407989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 [2024-07-15 09:40:01.408287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.408293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 [2024-07-15 09:40:01.408609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.408615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 [2024-07-15 09:40:01.408785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.408792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 [2024-07-15 09:40:01.409084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.409090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 [2024-07-15 09:40:01.409411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.409418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 [2024-07-15 09:40:01.409600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.409606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 [2024-07-15 09:40:01.409890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.409897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 
00:31:14.483 [2024-07-15 09:40:01.410222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.410228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 [2024-07-15 09:40:01.410548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.410554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.483 [2024-07-15 09:40:01.410855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.410862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:14.483 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.483 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:14.483 [2024-07-15 09:40:01.411162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.411168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 [2024-07-15 09:40:01.411530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.411537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.483 qpair failed and we were unable to recover it. 00:31:14.483 [2024-07-15 09:40:01.411684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.483 [2024-07-15 09:40:01.411690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.411972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.411979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.412290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.412297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 
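target_disconnect.sh@24 attaches the Malloc0 bdev to the subsystem as a namespace. A sketch of the same step via rpc.py; the bdev_malloc_create line is an assumption with placeholder size values, since Malloc0 was created earlier in the test and that command is not part of this excerpt:

# create a malloc bdev (size and block size are placeholder values)
./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
# expose it as a namespace of the subsystem
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0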
00:31:14.484 [2024-07-15 09:40:01.412620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.412627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.412712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.412718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.412862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.412869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.413217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.413223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.413390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.413397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.413678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.413685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.413925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.413932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.414241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.414247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.414642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.414648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.414689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.414695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 
00:31:14.484 [2024-07-15 09:40:01.415093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.415099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.415277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.415284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.415577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.415583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.415909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.415916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.416231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.416238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.416555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.416561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.416736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.416743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.416921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.416927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.417242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.417248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.417447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.417454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 
00:31:14.484 [2024-07-15 09:40:01.417638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.417645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.417932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.417938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.417976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.417982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.418312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.418319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.418641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.418648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.484 [2024-07-15 09:40:01.418985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.418992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:14.484 [2024-07-15 09:40:01.419035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.419041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.484 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:14.484 [2024-07-15 09:40:01.419342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.419348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 
00:31:14.484 [2024-07-15 09:40:01.419533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.419540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.419833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.419840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.420016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.420024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.420200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.420206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.420373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.420380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.420714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.420720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.421037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.421044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.421219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.421227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.484 qpair failed and we were unable to recover it. 00:31:14.484 [2024-07-15 09:40:01.421557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.484 [2024-07-15 09:40:01.421563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.485 qpair failed and we were unable to recover it. 00:31:14.485 [2024-07-15 09:40:01.421785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.485 [2024-07-15 09:40:01.421792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.485 qpair failed and we were unable to recover it. 
00:31:14.485 [2024-07-15 09:40:01.421944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.485 [2024-07-15 09:40:01.421950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.485 qpair failed and we were unable to recover it. 00:31:14.485 [2024-07-15 09:40:01.422241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.485 [2024-07-15 09:40:01.422247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.485 qpair failed and we were unable to recover it. 00:31:14.485 [2024-07-15 09:40:01.422554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.485 [2024-07-15 09:40:01.422561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8b58000b90 with addr=10.0.0.2, port=4420 00:31:14.485 qpair failed and we were unable to recover it. 00:31:14.485 [2024-07-15 09:40:01.422605] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:14.485 [2024-07-15 09:40:01.425042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.485 [2024-07-15 09:40:01.425127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.485 [2024-07-15 09:40:01.425141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.485 [2024-07-15 09:40:01.425147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.485 [2024-07-15 09:40:01.425152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.485 [2024-07-15 09:40:01.425166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.485 qpair failed and we were unable to recover it. 
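Once target_disconnect.sh@25 has issued nvmf_subsystem_add_listener, the tcp.c notice "*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***" shows the target is finally accepting connections. Immediately afterwards the I/O-queue CONNECT is rejected by ctrlr.c ("Unknown controller ID 0x1"), and nvme_fabric.c reports the CONNECT command completing with a command-specific error (sct 1, sc 130) before the qpair is torn down; that rejected-connect path is what this target_disconnect test case is probing. The listener step as a standalone rpc.py sketch:

# make the subsystem reachable over NVMe/TCP
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420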
00:31:14.485 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.485 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:14.485 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.485 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:14.485 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.485 [2024-07-15 09:40:01.434965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.485 [2024-07-15 09:40:01.435021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.485 09:40:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 902433 00:31:14.485 [2024-07-15 09:40:01.435033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.485 [2024-07-15 09:40:01.435038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.485 [2024-07-15 09:40:01.435042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.485 [2024-07-15 09:40:01.435053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.485 qpair failed and we were unable to recover it. 00:31:14.485 [2024-07-15 09:40:01.445008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.485 [2024-07-15 09:40:01.445067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.485 [2024-07-15 09:40:01.445078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.485 [2024-07-15 09:40:01.445083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.485 [2024-07-15 09:40:01.445088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.485 [2024-07-15 09:40:01.445099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.485 qpair failed and we were unable to recover it. 
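target_disconnect.sh@26 also adds a discovery-subsystem listener on the same address and port, and target_disconnect.sh@50 then blocks on the background test process with `wait 902433`. From an initiator host the listeners could be exercised with nvme-cli; a sketch only, since nvme-cli is not invoked anywhere in this excerpt:

# query the discovery service exposed on 10.0.0.2:4420
nvme discover -t tcp -a 10.0.0.2 -s 4420
# attempt a connection to the I/O subsystem under test
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1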
00:31:14.485 [2024-07-15 09:40:01.454880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.485 [2024-07-15 09:40:01.454942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.485 [2024-07-15 09:40:01.454954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.485 [2024-07-15 09:40:01.454959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.485 [2024-07-15 09:40:01.454963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.485 [2024-07-15 09:40:01.454974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.485 qpair failed and we were unable to recover it. 00:31:14.485 [2024-07-15 09:40:01.464978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.485 [2024-07-15 09:40:01.465038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.485 [2024-07-15 09:40:01.465049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.485 [2024-07-15 09:40:01.465054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.485 [2024-07-15 09:40:01.465058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.485 [2024-07-15 09:40:01.465069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.485 qpair failed and we were unable to recover it. 00:31:14.485 [2024-07-15 09:40:01.475010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.485 [2024-07-15 09:40:01.475057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.485 [2024-07-15 09:40:01.475067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.485 [2024-07-15 09:40:01.475072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.485 [2024-07-15 09:40:01.475077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.485 [2024-07-15 09:40:01.475087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.485 qpair failed and we were unable to recover it. 
00:31:14.485 [2024-07-15 09:40:01.485012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.485 [2024-07-15 09:40:01.485080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.485 [2024-07-15 09:40:01.485091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.485 [2024-07-15 09:40:01.485099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.485 [2024-07-15 09:40:01.485103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.485 [2024-07-15 09:40:01.485113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.485 qpair failed and we were unable to recover it. 00:31:14.485 [2024-07-15 09:40:01.495054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.485 [2024-07-15 09:40:01.495107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.485 [2024-07-15 09:40:01.495118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.485 [2024-07-15 09:40:01.495123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.485 [2024-07-15 09:40:01.495127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.485 [2024-07-15 09:40:01.495138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.485 qpair failed and we were unable to recover it. 00:31:14.485 [2024-07-15 09:40:01.505123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.485 [2024-07-15 09:40:01.505193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.485 [2024-07-15 09:40:01.505204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.485 [2024-07-15 09:40:01.505209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.485 [2024-07-15 09:40:01.505213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.485 [2024-07-15 09:40:01.505223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.485 qpair failed and we were unable to recover it. 
00:31:14.485 [2024-07-15 09:40:01.515078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.485 [2024-07-15 09:40:01.515132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.485 [2024-07-15 09:40:01.515143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.485 [2024-07-15 09:40:01.515148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.485 [2024-07-15 09:40:01.515152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.485 [2024-07-15 09:40:01.515162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.485 qpair failed and we were unable to recover it. 00:31:14.485 [2024-07-15 09:40:01.525027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.485 [2024-07-15 09:40:01.525084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.485 [2024-07-15 09:40:01.525095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.485 [2024-07-15 09:40:01.525101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.485 [2024-07-15 09:40:01.525105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.485 [2024-07-15 09:40:01.525115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.485 qpair failed and we were unable to recover it. 00:31:14.485 [2024-07-15 09:40:01.535167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.485 [2024-07-15 09:40:01.535219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.485 [2024-07-15 09:40:01.535230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.485 [2024-07-15 09:40:01.535235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.485 [2024-07-15 09:40:01.535239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.486 [2024-07-15 09:40:01.535249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.486 qpair failed and we were unable to recover it. 
00:31:14.486 [2024-07-15 09:40:01.545087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.486 [2024-07-15 09:40:01.545138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.486 [2024-07-15 09:40:01.545148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.486 [2024-07-15 09:40:01.545153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.486 [2024-07-15 09:40:01.545157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.486 [2024-07-15 09:40:01.545167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.486 qpair failed and we were unable to recover it. 00:31:14.486 [2024-07-15 09:40:01.555240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.486 [2024-07-15 09:40:01.555287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.486 [2024-07-15 09:40:01.555297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.486 [2024-07-15 09:40:01.555302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.486 [2024-07-15 09:40:01.555307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.486 [2024-07-15 09:40:01.555317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.486 qpair failed and we were unable to recover it. 00:31:14.486 [2024-07-15 09:40:01.565126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.486 [2024-07-15 09:40:01.565191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.486 [2024-07-15 09:40:01.565202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.486 [2024-07-15 09:40:01.565207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.486 [2024-07-15 09:40:01.565212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.486 [2024-07-15 09:40:01.565222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.486 qpair failed and we were unable to recover it. 
00:31:14.486 [2024-07-15 09:40:01.575246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.486 [2024-07-15 09:40:01.575298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.486 [2024-07-15 09:40:01.575309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.486 [2024-07-15 09:40:01.575317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.486 [2024-07-15 09:40:01.575321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.486 [2024-07-15 09:40:01.575331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.486 qpair failed and we were unable to recover it. 00:31:14.486 [2024-07-15 09:40:01.585194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.486 [2024-07-15 09:40:01.585262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.486 [2024-07-15 09:40:01.585274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.486 [2024-07-15 09:40:01.585279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.486 [2024-07-15 09:40:01.585283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.486 [2024-07-15 09:40:01.585294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.486 qpair failed and we were unable to recover it. 00:31:14.486 [2024-07-15 09:40:01.595331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.486 [2024-07-15 09:40:01.595379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.486 [2024-07-15 09:40:01.595390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.486 [2024-07-15 09:40:01.595395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.486 [2024-07-15 09:40:01.595400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.486 [2024-07-15 09:40:01.595411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.486 qpair failed and we were unable to recover it. 
00:31:14.486 [2024-07-15 09:40:01.605368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.486 [2024-07-15 09:40:01.605420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.486 [2024-07-15 09:40:01.605431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.486 [2024-07-15 09:40:01.605436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.486 [2024-07-15 09:40:01.605440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.486 [2024-07-15 09:40:01.605450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.486 qpair failed and we were unable to recover it. 00:31:14.486 [2024-07-15 09:40:01.615369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.486 [2024-07-15 09:40:01.615422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.486 [2024-07-15 09:40:01.615433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.486 [2024-07-15 09:40:01.615438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.486 [2024-07-15 09:40:01.615442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.486 [2024-07-15 09:40:01.615452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.486 qpair failed and we were unable to recover it. 00:31:14.486 [2024-07-15 09:40:01.625415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.486 [2024-07-15 09:40:01.625467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.486 [2024-07-15 09:40:01.625478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.486 [2024-07-15 09:40:01.625483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.486 [2024-07-15 09:40:01.625488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.486 [2024-07-15 09:40:01.625498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.486 qpair failed and we were unable to recover it. 
00:31:14.486 [2024-07-15 09:40:01.635440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.486 [2024-07-15 09:40:01.635488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.486 [2024-07-15 09:40:01.635498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.486 [2024-07-15 09:40:01.635503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.486 [2024-07-15 09:40:01.635507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.486 [2024-07-15 09:40:01.635518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.486 qpair failed and we were unable to recover it. 00:31:14.486 [2024-07-15 09:40:01.645335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.486 [2024-07-15 09:40:01.645382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.486 [2024-07-15 09:40:01.645393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.486 [2024-07-15 09:40:01.645398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.486 [2024-07-15 09:40:01.645403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.486 [2024-07-15 09:40:01.645414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.486 qpair failed and we were unable to recover it. 00:31:14.486 [2024-07-15 09:40:01.655470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.486 [2024-07-15 09:40:01.655518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.486 [2024-07-15 09:40:01.655529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.486 [2024-07-15 09:40:01.655534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.486 [2024-07-15 09:40:01.655538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.486 [2024-07-15 09:40:01.655549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.486 qpair failed and we were unable to recover it. 
00:31:14.486 [2024-07-15 09:40:01.665512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.486 [2024-07-15 09:40:01.665574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.486 [2024-07-15 09:40:01.665595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.486 [2024-07-15 09:40:01.665602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.486 [2024-07-15 09:40:01.665606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.486 [2024-07-15 09:40:01.665620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.486 qpair failed and we were unable to recover it. 00:31:14.749 [2024-07-15 09:40:01.675511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.749 [2024-07-15 09:40:01.675561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.749 [2024-07-15 09:40:01.675579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.749 [2024-07-15 09:40:01.675586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.749 [2024-07-15 09:40:01.675591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.749 [2024-07-15 09:40:01.675604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.749 qpair failed and we were unable to recover it. 00:31:14.749 [2024-07-15 09:40:01.685566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.749 [2024-07-15 09:40:01.685625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.749 [2024-07-15 09:40:01.685643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.749 [2024-07-15 09:40:01.685649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.749 [2024-07-15 09:40:01.685654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.749 [2024-07-15 09:40:01.685667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.749 qpair failed and we were unable to recover it. 
00:31:14.749 [2024-07-15 09:40:01.695577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.749 [2024-07-15 09:40:01.695627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.749 [2024-07-15 09:40:01.695640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.749 [2024-07-15 09:40:01.695645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.749 [2024-07-15 09:40:01.695649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.749 [2024-07-15 09:40:01.695660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.749 qpair failed and we were unable to recover it. 00:31:14.749 [2024-07-15 09:40:01.705784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.749 [2024-07-15 09:40:01.705845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.749 [2024-07-15 09:40:01.705857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.749 [2024-07-15 09:40:01.705862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.749 [2024-07-15 09:40:01.705866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.749 [2024-07-15 09:40:01.705880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.750 qpair failed and we were unable to recover it. 00:31:14.750 [2024-07-15 09:40:01.715565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.750 [2024-07-15 09:40:01.715616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.750 [2024-07-15 09:40:01.715627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.750 [2024-07-15 09:40:01.715632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.750 [2024-07-15 09:40:01.715636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.750 [2024-07-15 09:40:01.715646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.750 qpair failed and we were unable to recover it. 
00:31:14.750 [2024-07-15 09:40:01.725639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.750 [2024-07-15 09:40:01.725688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.750 [2024-07-15 09:40:01.725699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.750 [2024-07-15 09:40:01.725704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.750 [2024-07-15 09:40:01.725708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.750 [2024-07-15 09:40:01.725718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.750 qpair failed and we were unable to recover it. 00:31:14.750 [2024-07-15 09:40:01.735742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.750 [2024-07-15 09:40:01.735799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.750 [2024-07-15 09:40:01.735810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.750 [2024-07-15 09:40:01.735815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.750 [2024-07-15 09:40:01.735820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.750 [2024-07-15 09:40:01.735830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.750 qpair failed and we were unable to recover it. 00:31:14.750 [2024-07-15 09:40:01.745676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.750 [2024-07-15 09:40:01.745731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.750 [2024-07-15 09:40:01.745742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.750 [2024-07-15 09:40:01.745746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.750 [2024-07-15 09:40:01.745754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.750 [2024-07-15 09:40:01.745765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.750 qpair failed and we were unable to recover it. 
00:31:14.750 [2024-07-15 09:40:01.755747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.750 [2024-07-15 09:40:01.755798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.750 [2024-07-15 09:40:01.755811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.750 [2024-07-15 09:40:01.755816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.750 [2024-07-15 09:40:01.755821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.750 [2024-07-15 09:40:01.755831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.750 qpair failed and we were unable to recover it. 00:31:14.750 [2024-07-15 09:40:01.765780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.750 [2024-07-15 09:40:01.765827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.750 [2024-07-15 09:40:01.765837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.750 [2024-07-15 09:40:01.765843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.750 [2024-07-15 09:40:01.765847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.750 [2024-07-15 09:40:01.765857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.750 qpair failed and we were unable to recover it. 00:31:14.750 [2024-07-15 09:40:01.775799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.750 [2024-07-15 09:40:01.775853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.750 [2024-07-15 09:40:01.775863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.750 [2024-07-15 09:40:01.775868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.750 [2024-07-15 09:40:01.775873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.750 [2024-07-15 09:40:01.775883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.750 qpair failed and we were unable to recover it. 
00:31:14.750 [2024-07-15 09:40:01.785843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.750 [2024-07-15 09:40:01.785896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.750 [2024-07-15 09:40:01.785906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.750 [2024-07-15 09:40:01.785911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.750 [2024-07-15 09:40:01.785916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.750 [2024-07-15 09:40:01.785926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.750 qpair failed and we were unable to recover it. 00:31:14.750 [2024-07-15 09:40:01.795853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.750 [2024-07-15 09:40:01.795935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.750 [2024-07-15 09:40:01.795946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.750 [2024-07-15 09:40:01.795951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.750 [2024-07-15 09:40:01.795961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.750 [2024-07-15 09:40:01.795971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.750 qpair failed and we were unable to recover it. 00:31:14.750 [2024-07-15 09:40:01.805897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.750 [2024-07-15 09:40:01.805947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.750 [2024-07-15 09:40:01.805958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.750 [2024-07-15 09:40:01.805963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.750 [2024-07-15 09:40:01.805967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.750 [2024-07-15 09:40:01.805978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.750 qpair failed and we were unable to recover it. 
00:31:14.750 [2024-07-15 09:40:01.815796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.750 [2024-07-15 09:40:01.815848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.750 [2024-07-15 09:40:01.815860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.750 [2024-07-15 09:40:01.815866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.750 [2024-07-15 09:40:01.815870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.750 [2024-07-15 09:40:01.815881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.750 qpair failed and we were unable to recover it. 00:31:14.750 [2024-07-15 09:40:01.825953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.751 [2024-07-15 09:40:01.826011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.751 [2024-07-15 09:40:01.826022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.751 [2024-07-15 09:40:01.826027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.751 [2024-07-15 09:40:01.826031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.751 [2024-07-15 09:40:01.826042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.751 qpair failed and we were unable to recover it. 00:31:14.751 [2024-07-15 09:40:01.835990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.751 [2024-07-15 09:40:01.836040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.751 [2024-07-15 09:40:01.836050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.751 [2024-07-15 09:40:01.836056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.751 [2024-07-15 09:40:01.836060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.751 [2024-07-15 09:40:01.836070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.751 qpair failed and we were unable to recover it. 
00:31:14.751 [2024-07-15 09:40:01.846031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.751 [2024-07-15 09:40:01.846089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.751 [2024-07-15 09:40:01.846099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.751 [2024-07-15 09:40:01.846104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.751 [2024-07-15 09:40:01.846109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.751 [2024-07-15 09:40:01.846119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.751 qpair failed and we were unable to recover it. 00:31:14.751 [2024-07-15 09:40:01.856030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.751 [2024-07-15 09:40:01.856080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.751 [2024-07-15 09:40:01.856090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.751 [2024-07-15 09:40:01.856095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.751 [2024-07-15 09:40:01.856100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.751 [2024-07-15 09:40:01.856110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.751 qpair failed and we were unable to recover it. 00:31:14.751 [2024-07-15 09:40:01.865969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.751 [2024-07-15 09:40:01.866022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.751 [2024-07-15 09:40:01.866033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.751 [2024-07-15 09:40:01.866038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.751 [2024-07-15 09:40:01.866042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.751 [2024-07-15 09:40:01.866053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.751 qpair failed and we were unable to recover it. 
00:31:14.751 [2024-07-15 09:40:01.875945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.751 [2024-07-15 09:40:01.875991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.751 [2024-07-15 09:40:01.876002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.751 [2024-07-15 09:40:01.876007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.751 [2024-07-15 09:40:01.876012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.751 [2024-07-15 09:40:01.876022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.751 qpair failed and we were unable to recover it. 00:31:14.751 [2024-07-15 09:40:01.886110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.751 [2024-07-15 09:40:01.886158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.751 [2024-07-15 09:40:01.886168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.751 [2024-07-15 09:40:01.886176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.751 [2024-07-15 09:40:01.886180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.751 [2024-07-15 09:40:01.886191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.751 qpair failed and we were unable to recover it. 00:31:14.751 [2024-07-15 09:40:01.896132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.751 [2024-07-15 09:40:01.896193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.751 [2024-07-15 09:40:01.896204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.751 [2024-07-15 09:40:01.896210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.751 [2024-07-15 09:40:01.896214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.751 [2024-07-15 09:40:01.896225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.751 qpair failed and we were unable to recover it. 
00:31:14.751 [2024-07-15 09:40:01.906173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.751 [2024-07-15 09:40:01.906253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.751 [2024-07-15 09:40:01.906263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.751 [2024-07-15 09:40:01.906268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.751 [2024-07-15 09:40:01.906273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.751 [2024-07-15 09:40:01.906283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.751 qpair failed and we were unable to recover it. 00:31:14.751 [2024-07-15 09:40:01.916211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.751 [2024-07-15 09:40:01.916258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.751 [2024-07-15 09:40:01.916269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.751 [2024-07-15 09:40:01.916274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.751 [2024-07-15 09:40:01.916278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.751 [2024-07-15 09:40:01.916289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.751 qpair failed and we were unable to recover it. 00:31:14.751 [2024-07-15 09:40:01.926231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.751 [2024-07-15 09:40:01.926280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.751 [2024-07-15 09:40:01.926291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.751 [2024-07-15 09:40:01.926296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.751 [2024-07-15 09:40:01.926303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.752 [2024-07-15 09:40:01.926314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.752 qpair failed and we were unable to recover it. 
00:31:14.752 [2024-07-15 09:40:01.936139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.752 [2024-07-15 09:40:01.936197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.752 [2024-07-15 09:40:01.936208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.752 [2024-07-15 09:40:01.936212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.752 [2024-07-15 09:40:01.936217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.752 [2024-07-15 09:40:01.936227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.752 qpair failed and we were unable to recover it. 00:31:14.752 [2024-07-15 09:40:01.946304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.752 [2024-07-15 09:40:01.946405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.752 [2024-07-15 09:40:01.946416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.752 [2024-07-15 09:40:01.946421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.752 [2024-07-15 09:40:01.946426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:14.752 [2024-07-15 09:40:01.946436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.752 qpair failed and we were unable to recover it. 00:31:15.014 [2024-07-15 09:40:01.956312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.014 [2024-07-15 09:40:01.956362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.014 [2024-07-15 09:40:01.956373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.014 [2024-07-15 09:40:01.956378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.014 [2024-07-15 09:40:01.956382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.014 [2024-07-15 09:40:01.956393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.014 qpair failed and we were unable to recover it. 
00:31:15.014 [2024-07-15 09:40:01.966342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.014 [2024-07-15 09:40:01.966393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.014 [2024-07-15 09:40:01.966404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.014 [2024-07-15 09:40:01.966409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.014 [2024-07-15 09:40:01.966414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.014 [2024-07-15 09:40:01.966425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.014 qpair failed and we were unable to recover it. 00:31:15.014 [2024-07-15 09:40:01.976325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.014 [2024-07-15 09:40:01.976375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.014 [2024-07-15 09:40:01.976385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.015 [2024-07-15 09:40:01.976393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.015 [2024-07-15 09:40:01.976398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.015 [2024-07-15 09:40:01.976408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.015 qpair failed and we were unable to recover it. 00:31:15.015 [2024-07-15 09:40:01.986408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.015 [2024-07-15 09:40:01.986468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.015 [2024-07-15 09:40:01.986480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.015 [2024-07-15 09:40:01.986486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.015 [2024-07-15 09:40:01.986491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.015 [2024-07-15 09:40:01.986501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.015 qpair failed and we were unable to recover it. 
00:31:15.015 [2024-07-15 09:40:01.996421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.015 [2024-07-15 09:40:01.996470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.015 [2024-07-15 09:40:01.996482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.015 [2024-07-15 09:40:01.996487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.015 [2024-07-15 09:40:01.996491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.015 [2024-07-15 09:40:01.996502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.015 qpair failed and we were unable to recover it. 00:31:15.015 [2024-07-15 09:40:02.006321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.015 [2024-07-15 09:40:02.006367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.015 [2024-07-15 09:40:02.006378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.015 [2024-07-15 09:40:02.006383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.015 [2024-07-15 09:40:02.006387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.015 [2024-07-15 09:40:02.006398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.015 qpair failed and we were unable to recover it. 00:31:15.015 [2024-07-15 09:40:02.016472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.015 [2024-07-15 09:40:02.016521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.015 [2024-07-15 09:40:02.016531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.015 [2024-07-15 09:40:02.016537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.015 [2024-07-15 09:40:02.016541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.015 [2024-07-15 09:40:02.016551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.015 qpair failed and we were unable to recover it. 
00:31:15.015 [2024-07-15 09:40:02.026517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.015 [2024-07-15 09:40:02.026572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.015 [2024-07-15 09:40:02.026583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.015 [2024-07-15 09:40:02.026588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.015 [2024-07-15 09:40:02.026593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.015 [2024-07-15 09:40:02.026603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.015 qpair failed and we were unable to recover it. 00:31:15.015 [2024-07-15 09:40:02.036403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.015 [2024-07-15 09:40:02.036450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.015 [2024-07-15 09:40:02.036461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.015 [2024-07-15 09:40:02.036466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.015 [2024-07-15 09:40:02.036470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.015 [2024-07-15 09:40:02.036481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.015 qpair failed and we were unable to recover it. 00:31:15.015 [2024-07-15 09:40:02.046555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.015 [2024-07-15 09:40:02.046609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.015 [2024-07-15 09:40:02.046620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.015 [2024-07-15 09:40:02.046625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.015 [2024-07-15 09:40:02.046629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.015 [2024-07-15 09:40:02.046640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.015 qpair failed and we were unable to recover it. 
00:31:15.015 [2024-07-15 09:40:02.056558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.015 [2024-07-15 09:40:02.056606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.015 [2024-07-15 09:40:02.056617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.015 [2024-07-15 09:40:02.056622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.015 [2024-07-15 09:40:02.056626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.015 [2024-07-15 09:40:02.056636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.015 qpair failed and we were unable to recover it. 00:31:15.015 [2024-07-15 09:40:02.066630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.015 [2024-07-15 09:40:02.066689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.015 [2024-07-15 09:40:02.066702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.015 [2024-07-15 09:40:02.066707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.015 [2024-07-15 09:40:02.066712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.015 [2024-07-15 09:40:02.066722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.015 qpair failed and we were unable to recover it. 00:31:15.015 [2024-07-15 09:40:02.076519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.015 [2024-07-15 09:40:02.076563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.015 [2024-07-15 09:40:02.076574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.015 [2024-07-15 09:40:02.076579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.015 [2024-07-15 09:40:02.076584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.015 [2024-07-15 09:40:02.076594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.015 qpair failed and we were unable to recover it. 
00:31:15.015 [2024-07-15 09:40:02.086668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.015 [2024-07-15 09:40:02.086716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.015 [2024-07-15 09:40:02.086726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.015 [2024-07-15 09:40:02.086732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.015 [2024-07-15 09:40:02.086736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.015 [2024-07-15 09:40:02.086746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.015 qpair failed and we were unable to recover it. 00:31:15.015 [2024-07-15 09:40:02.096677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.015 [2024-07-15 09:40:02.096728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.015 [2024-07-15 09:40:02.096739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.015 [2024-07-15 09:40:02.096744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.015 [2024-07-15 09:40:02.096749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.015 [2024-07-15 09:40:02.096763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.015 qpair failed and we were unable to recover it. 00:31:15.015 [2024-07-15 09:40:02.106727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.015 [2024-07-15 09:40:02.106785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.015 [2024-07-15 09:40:02.106796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.015 [2024-07-15 09:40:02.106801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.015 [2024-07-15 09:40:02.106805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.015 [2024-07-15 09:40:02.106818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.015 qpair failed and we were unable to recover it. 
00:31:15.015 [2024-07-15 09:40:02.116746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.015 [2024-07-15 09:40:02.116793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.015 [2024-07-15 09:40:02.116803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.015 [2024-07-15 09:40:02.116808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.015 [2024-07-15 09:40:02.116813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.016 [2024-07-15 09:40:02.116823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.016 qpair failed and we were unable to recover it. 00:31:15.016 [2024-07-15 09:40:02.126785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.016 [2024-07-15 09:40:02.126875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.016 [2024-07-15 09:40:02.126888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.016 [2024-07-15 09:40:02.126893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.016 [2024-07-15 09:40:02.126898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.016 [2024-07-15 09:40:02.126908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.016 qpair failed and we were unable to recover it. 00:31:15.016 [2024-07-15 09:40:02.136798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.016 [2024-07-15 09:40:02.136849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.016 [2024-07-15 09:40:02.136860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.016 [2024-07-15 09:40:02.136865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.016 [2024-07-15 09:40:02.136869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.016 [2024-07-15 09:40:02.136880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.016 qpair failed and we were unable to recover it. 
00:31:15.016 [2024-07-15 09:40:02.146833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.016 [2024-07-15 09:40:02.146888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.016 [2024-07-15 09:40:02.146898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.016 [2024-07-15 09:40:02.146903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.016 [2024-07-15 09:40:02.146908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.016 [2024-07-15 09:40:02.146918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.016 qpair failed and we were unable to recover it. 00:31:15.016 [2024-07-15 09:40:02.156852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.016 [2024-07-15 09:40:02.156901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.016 [2024-07-15 09:40:02.156915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.016 [2024-07-15 09:40:02.156919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.016 [2024-07-15 09:40:02.156924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.016 [2024-07-15 09:40:02.156934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.016 qpair failed and we were unable to recover it. 00:31:15.016 [2024-07-15 09:40:02.166795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.016 [2024-07-15 09:40:02.166895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.016 [2024-07-15 09:40:02.166907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.016 [2024-07-15 09:40:02.166912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.016 [2024-07-15 09:40:02.166916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.016 [2024-07-15 09:40:02.166926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.016 qpair failed and we were unable to recover it. 
00:31:15.016 [2024-07-15 09:40:02.176926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.016 [2024-07-15 09:40:02.177000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.016 [2024-07-15 09:40:02.177011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.016 [2024-07-15 09:40:02.177016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.016 [2024-07-15 09:40:02.177020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.016 [2024-07-15 09:40:02.177031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.016 qpair failed and we were unable to recover it. 00:31:15.016 [2024-07-15 09:40:02.186955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.016 [2024-07-15 09:40:02.187007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.016 [2024-07-15 09:40:02.187017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.016 [2024-07-15 09:40:02.187023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.016 [2024-07-15 09:40:02.187027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.016 [2024-07-15 09:40:02.187037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.016 qpair failed and we were unable to recover it. 00:31:15.016 [2024-07-15 09:40:02.197015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.016 [2024-07-15 09:40:02.197063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.016 [2024-07-15 09:40:02.197073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.016 [2024-07-15 09:40:02.197079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.016 [2024-07-15 09:40:02.197086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.016 [2024-07-15 09:40:02.197097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.016 qpair failed and we were unable to recover it. 
00:31:15.016 [2024-07-15 09:40:02.207000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.016 [2024-07-15 09:40:02.207056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.016 [2024-07-15 09:40:02.207067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.016 [2024-07-15 09:40:02.207072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.016 [2024-07-15 09:40:02.207076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.016 [2024-07-15 09:40:02.207087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.016 qpair failed and we were unable to recover it. 00:31:15.279 [2024-07-15 09:40:02.217031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.279 [2024-07-15 09:40:02.217122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.279 [2024-07-15 09:40:02.217133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.279 [2024-07-15 09:40:02.217138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.279 [2024-07-15 09:40:02.217143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.279 [2024-07-15 09:40:02.217153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.279 qpair failed and we were unable to recover it. 00:31:15.279 [2024-07-15 09:40:02.227066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.279 [2024-07-15 09:40:02.227124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.279 [2024-07-15 09:40:02.227134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.279 [2024-07-15 09:40:02.227139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.279 [2024-07-15 09:40:02.227144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.280 [2024-07-15 09:40:02.227154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.280 qpair failed and we were unable to recover it. 
00:31:15.280 [2024-07-15 09:40:02.237070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.280 [2024-07-15 09:40:02.237123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.280 [2024-07-15 09:40:02.237134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.280 [2024-07-15 09:40:02.237139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.280 [2024-07-15 09:40:02.237144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.280 [2024-07-15 09:40:02.237154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.280 qpair failed and we were unable to recover it. 00:31:15.280 [2024-07-15 09:40:02.247116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.280 [2024-07-15 09:40:02.247175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.280 [2024-07-15 09:40:02.247186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.280 [2024-07-15 09:40:02.247191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.280 [2024-07-15 09:40:02.247196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.280 [2024-07-15 09:40:02.247206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.280 qpair failed and we were unable to recover it. 00:31:15.280 [2024-07-15 09:40:02.257142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.280 [2024-07-15 09:40:02.257196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.280 [2024-07-15 09:40:02.257207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.280 [2024-07-15 09:40:02.257211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.280 [2024-07-15 09:40:02.257216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.280 [2024-07-15 09:40:02.257226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.280 qpair failed and we were unable to recover it. 
00:31:15.280 [2024-07-15 09:40:02.267106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.280 [2024-07-15 09:40:02.267162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.280 [2024-07-15 09:40:02.267173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.280 [2024-07-15 09:40:02.267178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.280 [2024-07-15 09:40:02.267182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.280 [2024-07-15 09:40:02.267193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.280 qpair failed and we were unable to recover it. 00:31:15.280 [2024-07-15 09:40:02.277079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.280 [2024-07-15 09:40:02.277128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.280 [2024-07-15 09:40:02.277138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.280 [2024-07-15 09:40:02.277143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.280 [2024-07-15 09:40:02.277147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.280 [2024-07-15 09:40:02.277158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.280 qpair failed and we were unable to recover it. 00:31:15.280 [2024-07-15 09:40:02.287247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.280 [2024-07-15 09:40:02.287299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.280 [2024-07-15 09:40:02.287309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.280 [2024-07-15 09:40:02.287314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.280 [2024-07-15 09:40:02.287322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.280 [2024-07-15 09:40:02.287332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.280 qpair failed and we were unable to recover it. 
00:31:15.280 [2024-07-15 09:40:02.297269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.280 [2024-07-15 09:40:02.297317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.280 [2024-07-15 09:40:02.297328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.280 [2024-07-15 09:40:02.297333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.280 [2024-07-15 09:40:02.297338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.280 [2024-07-15 09:40:02.297348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.280 qpair failed and we were unable to recover it. 00:31:15.280 [2024-07-15 09:40:02.307165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.280 [2024-07-15 09:40:02.307223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.280 [2024-07-15 09:40:02.307234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.280 [2024-07-15 09:40:02.307239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.280 [2024-07-15 09:40:02.307243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.280 [2024-07-15 09:40:02.307253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.280 qpair failed and we were unable to recover it. 00:31:15.280 [2024-07-15 09:40:02.317295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.280 [2024-07-15 09:40:02.317341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.280 [2024-07-15 09:40:02.317352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.280 [2024-07-15 09:40:02.317357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.280 [2024-07-15 09:40:02.317361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.280 [2024-07-15 09:40:02.317371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.280 qpair failed and we were unable to recover it. 
00:31:15.280 [2024-07-15 09:40:02.327352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.280 [2024-07-15 09:40:02.327402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.280 [2024-07-15 09:40:02.327413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.280 [2024-07-15 09:40:02.327418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.280 [2024-07-15 09:40:02.327422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.280 [2024-07-15 09:40:02.327432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.280 qpair failed and we were unable to recover it. 00:31:15.280 [2024-07-15 09:40:02.337364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.280 [2024-07-15 09:40:02.337414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.280 [2024-07-15 09:40:02.337425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.280 [2024-07-15 09:40:02.337430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.280 [2024-07-15 09:40:02.337434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.280 [2024-07-15 09:40:02.337444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.280 qpair failed and we were unable to recover it. 00:31:15.280 [2024-07-15 09:40:02.347410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.280 [2024-07-15 09:40:02.347460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.280 [2024-07-15 09:40:02.347470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.280 [2024-07-15 09:40:02.347475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.280 [2024-07-15 09:40:02.347480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.280 [2024-07-15 09:40:02.347489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.280 qpair failed and we were unable to recover it. 
00:31:15.280 [2024-07-15 09:40:02.357422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.280 [2024-07-15 09:40:02.357471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.280 [2024-07-15 09:40:02.357482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.280 [2024-07-15 09:40:02.357487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.280 [2024-07-15 09:40:02.357492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.280 [2024-07-15 09:40:02.357502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.280 qpair failed and we were unable to recover it. 00:31:15.280 [2024-07-15 09:40:02.367463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.280 [2024-07-15 09:40:02.367515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.280 [2024-07-15 09:40:02.367526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.280 [2024-07-15 09:40:02.367531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.280 [2024-07-15 09:40:02.367536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.280 [2024-07-15 09:40:02.367545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.280 qpair failed and we were unable to recover it. 00:31:15.281 [2024-07-15 09:40:02.377481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.281 [2024-07-15 09:40:02.377531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.281 [2024-07-15 09:40:02.377542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.281 [2024-07-15 09:40:02.377549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.281 [2024-07-15 09:40:02.377554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.281 [2024-07-15 09:40:02.377564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.281 qpair failed and we were unable to recover it. 
00:31:15.281 [2024-07-15 09:40:02.387394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.281 [2024-07-15 09:40:02.387452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.281 [2024-07-15 09:40:02.387464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.281 [2024-07-15 09:40:02.387469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.281 [2024-07-15 09:40:02.387473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.281 [2024-07-15 09:40:02.387483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.281 qpair failed and we were unable to recover it. 00:31:15.281 [2024-07-15 09:40:02.397531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.281 [2024-07-15 09:40:02.397590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.281 [2024-07-15 09:40:02.397603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.281 [2024-07-15 09:40:02.397609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.281 [2024-07-15 09:40:02.397614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.281 [2024-07-15 09:40:02.397625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.281 qpair failed and we were unable to recover it. 00:31:15.281 [2024-07-15 09:40:02.407579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.281 [2024-07-15 09:40:02.407636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.281 [2024-07-15 09:40:02.407648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.281 [2024-07-15 09:40:02.407653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.281 [2024-07-15 09:40:02.407659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.281 [2024-07-15 09:40:02.407669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.281 qpair failed and we were unable to recover it. 
00:31:15.281 [2024-07-15 09:40:02.417602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.281 [2024-07-15 09:40:02.417663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.281 [2024-07-15 09:40:02.417674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.281 [2024-07-15 09:40:02.417679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.281 [2024-07-15 09:40:02.417683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.281 [2024-07-15 09:40:02.417693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.281 qpair failed and we were unable to recover it. 00:31:15.281 [2024-07-15 09:40:02.427686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.281 [2024-07-15 09:40:02.427761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.281 [2024-07-15 09:40:02.427773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.281 [2024-07-15 09:40:02.427778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.281 [2024-07-15 09:40:02.427782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.281 [2024-07-15 09:40:02.427793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.281 qpair failed and we were unable to recover it. 00:31:15.281 [2024-07-15 09:40:02.437545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.281 [2024-07-15 09:40:02.437602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.281 [2024-07-15 09:40:02.437614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.281 [2024-07-15 09:40:02.437619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.281 [2024-07-15 09:40:02.437624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.281 [2024-07-15 09:40:02.437634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.281 qpair failed and we were unable to recover it. 
00:31:15.281 [2024-07-15 09:40:02.447696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.281 [2024-07-15 09:40:02.447744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.281 [2024-07-15 09:40:02.447759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.281 [2024-07-15 09:40:02.447764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.281 [2024-07-15 09:40:02.447769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.281 [2024-07-15 09:40:02.447779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.281 qpair failed and we were unable to recover it. 00:31:15.281 [2024-07-15 09:40:02.457710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.281 [2024-07-15 09:40:02.457797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.281 [2024-07-15 09:40:02.457809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.281 [2024-07-15 09:40:02.457814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.281 [2024-07-15 09:40:02.457819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.281 [2024-07-15 09:40:02.457829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.281 qpair failed and we were unable to recover it. 00:31:15.281 [2024-07-15 09:40:02.467789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.281 [2024-07-15 09:40:02.467850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.281 [2024-07-15 09:40:02.467863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.281 [2024-07-15 09:40:02.467869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.281 [2024-07-15 09:40:02.467873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.281 [2024-07-15 09:40:02.467884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.281 qpair failed and we were unable to recover it. 
00:31:15.281 [2024-07-15 09:40:02.477742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.281 [2024-07-15 09:40:02.477803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.281 [2024-07-15 09:40:02.477814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.281 [2024-07-15 09:40:02.477819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.281 [2024-07-15 09:40:02.477823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.281 [2024-07-15 09:40:02.477834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.281 qpair failed and we were unable to recover it. 00:31:15.544 [2024-07-15 09:40:02.487797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.544 [2024-07-15 09:40:02.487853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.544 [2024-07-15 09:40:02.487864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.544 [2024-07-15 09:40:02.487869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.544 [2024-07-15 09:40:02.487874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.544 [2024-07-15 09:40:02.487884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.544 qpair failed and we were unable to recover it. 00:31:15.544 [2024-07-15 09:40:02.497836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.544 [2024-07-15 09:40:02.497889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.544 [2024-07-15 09:40:02.497900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.544 [2024-07-15 09:40:02.497905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.544 [2024-07-15 09:40:02.497910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.544 [2024-07-15 09:40:02.497921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.544 qpair failed and we were unable to recover it. 
00:31:15.544 [2024-07-15 09:40:02.507742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.544 [2024-07-15 09:40:02.507801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.544 [2024-07-15 09:40:02.507813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.544 [2024-07-15 09:40:02.507818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.545 [2024-07-15 09:40:02.507823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.545 [2024-07-15 09:40:02.507836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.545 qpair failed and we were unable to recover it. 00:31:15.545 [2024-07-15 09:40:02.517856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.545 [2024-07-15 09:40:02.517910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.545 [2024-07-15 09:40:02.517922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.545 [2024-07-15 09:40:02.517926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.545 [2024-07-15 09:40:02.517931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.545 [2024-07-15 09:40:02.517942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.545 qpair failed and we were unable to recover it. 00:31:15.545 [2024-07-15 09:40:02.527918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.545 [2024-07-15 09:40:02.527973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.545 [2024-07-15 09:40:02.527984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.545 [2024-07-15 09:40:02.527989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.545 [2024-07-15 09:40:02.527994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.545 [2024-07-15 09:40:02.528005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.545 qpair failed and we were unable to recover it. 
00:31:15.545 [2024-07-15 09:40:02.538002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.545 [2024-07-15 09:40:02.538067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.545 [2024-07-15 09:40:02.538078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.545 [2024-07-15 09:40:02.538083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.545 [2024-07-15 09:40:02.538088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.545 [2024-07-15 09:40:02.538098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.545 qpair failed and we were unable to recover it. 00:31:15.545 [2024-07-15 09:40:02.548009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.545 [2024-07-15 09:40:02.548102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.545 [2024-07-15 09:40:02.548114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.545 [2024-07-15 09:40:02.548119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.545 [2024-07-15 09:40:02.548124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.545 [2024-07-15 09:40:02.548134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.545 qpair failed and we were unable to recover it. 00:31:15.545 [2024-07-15 09:40:02.558010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.545 [2024-07-15 09:40:02.558057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.545 [2024-07-15 09:40:02.558074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.545 [2024-07-15 09:40:02.558079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.545 [2024-07-15 09:40:02.558084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.545 [2024-07-15 09:40:02.558094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.545 qpair failed and we were unable to recover it. 
00:31:15.545 [2024-07-15 09:40:02.567906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.545 [2024-07-15 09:40:02.567995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.545 [2024-07-15 09:40:02.568007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.545 [2024-07-15 09:40:02.568012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.545 [2024-07-15 09:40:02.568016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.545 [2024-07-15 09:40:02.568027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.545 qpair failed and we were unable to recover it. 00:31:15.545 [2024-07-15 09:40:02.578071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.545 [2024-07-15 09:40:02.578122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.545 [2024-07-15 09:40:02.578133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.545 [2024-07-15 09:40:02.578138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.545 [2024-07-15 09:40:02.578143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.545 [2024-07-15 09:40:02.578153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.545 qpair failed and we were unable to recover it. 00:31:15.545 [2024-07-15 09:40:02.588116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.545 [2024-07-15 09:40:02.588182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.545 [2024-07-15 09:40:02.588193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.545 [2024-07-15 09:40:02.588198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.545 [2024-07-15 09:40:02.588202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.545 [2024-07-15 09:40:02.588212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.545 qpair failed and we were unable to recover it. 
00:31:15.545 [2024-07-15 09:40:02.598113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.545 [2024-07-15 09:40:02.598196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.545 [2024-07-15 09:40:02.598207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.545 [2024-07-15 09:40:02.598212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.545 [2024-07-15 09:40:02.598217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.545 [2024-07-15 09:40:02.598230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.545 qpair failed and we were unable to recover it. 00:31:15.545 [2024-07-15 09:40:02.608139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.545 [2024-07-15 09:40:02.608186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.545 [2024-07-15 09:40:02.608197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.545 [2024-07-15 09:40:02.608202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.545 [2024-07-15 09:40:02.608206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.545 [2024-07-15 09:40:02.608216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.545 qpair failed and we were unable to recover it. 00:31:15.545 [2024-07-15 09:40:02.618198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.545 [2024-07-15 09:40:02.618252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.545 [2024-07-15 09:40:02.618262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.545 [2024-07-15 09:40:02.618267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.545 [2024-07-15 09:40:02.618272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.545 [2024-07-15 09:40:02.618282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.545 qpair failed and we were unable to recover it. 
00:31:15.545 [2024-07-15 09:40:02.628074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.545 [2024-07-15 09:40:02.628126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.545 [2024-07-15 09:40:02.628137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.545 [2024-07-15 09:40:02.628142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.545 [2024-07-15 09:40:02.628146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.545 [2024-07-15 09:40:02.628157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.545 qpair failed and we were unable to recover it. 00:31:15.545 [2024-07-15 09:40:02.638194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.545 [2024-07-15 09:40:02.638248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.545 [2024-07-15 09:40:02.638258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.545 [2024-07-15 09:40:02.638263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.545 [2024-07-15 09:40:02.638268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.545 [2024-07-15 09:40:02.638278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.545 qpair failed and we were unable to recover it. 00:31:15.545 [2024-07-15 09:40:02.648245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.545 [2024-07-15 09:40:02.648342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.545 [2024-07-15 09:40:02.648354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.545 [2024-07-15 09:40:02.648359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.545 [2024-07-15 09:40:02.648363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.545 [2024-07-15 09:40:02.648374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.546 qpair failed and we were unable to recover it. 
00:31:15.546 [2024-07-15 09:40:02.658208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.546 [2024-07-15 09:40:02.658264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.546 [2024-07-15 09:40:02.658274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.546 [2024-07-15 09:40:02.658279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.546 [2024-07-15 09:40:02.658284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.546 [2024-07-15 09:40:02.658294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.546 qpair failed and we were unable to recover it. 00:31:15.546 [2024-07-15 09:40:02.668295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.546 [2024-07-15 09:40:02.668383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.546 [2024-07-15 09:40:02.668394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.546 [2024-07-15 09:40:02.668399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.546 [2024-07-15 09:40:02.668404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.546 [2024-07-15 09:40:02.668414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.546 qpair failed and we were unable to recover it. 00:31:15.546 [2024-07-15 09:40:02.678298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.546 [2024-07-15 09:40:02.678345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.546 [2024-07-15 09:40:02.678356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.546 [2024-07-15 09:40:02.678361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.546 [2024-07-15 09:40:02.678365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.546 [2024-07-15 09:40:02.678375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.546 qpair failed and we were unable to recover it. 
00:31:15.546 [2024-07-15 09:40:02.688346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.546 [2024-07-15 09:40:02.688390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.546 [2024-07-15 09:40:02.688401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.546 [2024-07-15 09:40:02.688406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.546 [2024-07-15 09:40:02.688413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.546 [2024-07-15 09:40:02.688423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.546 qpair failed and we were unable to recover it. 00:31:15.546 [2024-07-15 09:40:02.698391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.546 [2024-07-15 09:40:02.698443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.546 [2024-07-15 09:40:02.698454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.546 [2024-07-15 09:40:02.698459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.546 [2024-07-15 09:40:02.698464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.546 [2024-07-15 09:40:02.698474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.546 qpair failed and we were unable to recover it. 00:31:15.546 [2024-07-15 09:40:02.708414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.546 [2024-07-15 09:40:02.708474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.546 [2024-07-15 09:40:02.708492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.546 [2024-07-15 09:40:02.708498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.546 [2024-07-15 09:40:02.708503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.546 [2024-07-15 09:40:02.708517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.546 qpair failed and we were unable to recover it. 
00:31:15.546 [2024-07-15 09:40:02.718435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.546 [2024-07-15 09:40:02.718492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.546 [2024-07-15 09:40:02.718510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.546 [2024-07-15 09:40:02.718516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.546 [2024-07-15 09:40:02.718521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.546 [2024-07-15 09:40:02.718535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.546 qpair failed and we were unable to recover it. 00:31:15.546 [2024-07-15 09:40:02.728475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.546 [2024-07-15 09:40:02.728528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.546 [2024-07-15 09:40:02.728546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.546 [2024-07-15 09:40:02.728552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.546 [2024-07-15 09:40:02.728557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.546 [2024-07-15 09:40:02.728571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.546 qpair failed and we were unable to recover it. 00:31:15.546 [2024-07-15 09:40:02.738502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.546 [2024-07-15 09:40:02.738559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.546 [2024-07-15 09:40:02.738578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.546 [2024-07-15 09:40:02.738583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.546 [2024-07-15 09:40:02.738588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.546 [2024-07-15 09:40:02.738602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.546 qpair failed and we were unable to recover it. 
00:31:15.809 [2024-07-15 09:40:02.748574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.809 [2024-07-15 09:40:02.748628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.809 [2024-07-15 09:40:02.748641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.809 [2024-07-15 09:40:02.748646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.809 [2024-07-15 09:40:02.748650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.809 [2024-07-15 09:40:02.748661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.809 qpair failed and we were unable to recover it. 00:31:15.809 [2024-07-15 09:40:02.758514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.809 [2024-07-15 09:40:02.758563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.809 [2024-07-15 09:40:02.758574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.809 [2024-07-15 09:40:02.758580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.809 [2024-07-15 09:40:02.758584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.810 [2024-07-15 09:40:02.758595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.810 qpair failed and we were unable to recover it. 00:31:15.810 [2024-07-15 09:40:02.768578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.810 [2024-07-15 09:40:02.768632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.810 [2024-07-15 09:40:02.768644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.810 [2024-07-15 09:40:02.768649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.810 [2024-07-15 09:40:02.768653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.810 [2024-07-15 09:40:02.768663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.810 qpair failed and we were unable to recover it. 
00:31:15.810 [2024-07-15 09:40:02.778614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.810 [2024-07-15 09:40:02.778665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.810 [2024-07-15 09:40:02.778676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.810 [2024-07-15 09:40:02.778685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.810 [2024-07-15 09:40:02.778689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.810 [2024-07-15 09:40:02.778700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.810 qpair failed and we were unable to recover it. 00:31:15.810 [2024-07-15 09:40:02.788517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.810 [2024-07-15 09:40:02.788580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.810 [2024-07-15 09:40:02.788591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.810 [2024-07-15 09:40:02.788597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.810 [2024-07-15 09:40:02.788601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.810 [2024-07-15 09:40:02.788611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.810 qpair failed and we were unable to recover it. 00:31:15.810 [2024-07-15 09:40:02.798650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.810 [2024-07-15 09:40:02.798700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.810 [2024-07-15 09:40:02.798711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.810 [2024-07-15 09:40:02.798717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.810 [2024-07-15 09:40:02.798721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.810 [2024-07-15 09:40:02.798732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.810 qpair failed and we were unable to recover it. 
00:31:15.810 [2024-07-15 09:40:02.808682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.810 [2024-07-15 09:40:02.808734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.810 [2024-07-15 09:40:02.808745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.810 [2024-07-15 09:40:02.808754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.810 [2024-07-15 09:40:02.808759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.810 [2024-07-15 09:40:02.808770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.810 qpair failed and we were unable to recover it. 00:31:15.810 [2024-07-15 09:40:02.818720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.810 [2024-07-15 09:40:02.818781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.810 [2024-07-15 09:40:02.818793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.810 [2024-07-15 09:40:02.818798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.810 [2024-07-15 09:40:02.818803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.810 [2024-07-15 09:40:02.818813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.810 qpair failed and we were unable to recover it. 00:31:15.810 [2024-07-15 09:40:02.828628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.810 [2024-07-15 09:40:02.828681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.810 [2024-07-15 09:40:02.828693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.810 [2024-07-15 09:40:02.828699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.810 [2024-07-15 09:40:02.828704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.810 [2024-07-15 09:40:02.828714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.810 qpair failed and we were unable to recover it. 
00:31:15.810 [2024-07-15 09:40:02.838768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.810 [2024-07-15 09:40:02.838824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.810 [2024-07-15 09:40:02.838836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.810 [2024-07-15 09:40:02.838841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.810 [2024-07-15 09:40:02.838846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.810 [2024-07-15 09:40:02.838857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.810 qpair failed and we were unable to recover it. 00:31:15.810 [2024-07-15 09:40:02.848808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.810 [2024-07-15 09:40:02.848854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.810 [2024-07-15 09:40:02.848865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.810 [2024-07-15 09:40:02.848870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.810 [2024-07-15 09:40:02.848875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.810 [2024-07-15 09:40:02.848886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.810 qpair failed and we were unable to recover it. 00:31:15.810 [2024-07-15 09:40:02.858844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.810 [2024-07-15 09:40:02.858933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.810 [2024-07-15 09:40:02.858944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.810 [2024-07-15 09:40:02.858949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.810 [2024-07-15 09:40:02.858954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.810 [2024-07-15 09:40:02.858965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.810 qpair failed and we were unable to recover it. 
00:31:15.810 [2024-07-15 09:40:02.868855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.810 [2024-07-15 09:40:02.868946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.810 [2024-07-15 09:40:02.868960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.810 [2024-07-15 09:40:02.868965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.810 [2024-07-15 09:40:02.868969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.810 [2024-07-15 09:40:02.868980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.810 qpair failed and we were unable to recover it. 00:31:15.810 [2024-07-15 09:40:02.878895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.810 [2024-07-15 09:40:02.878942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.810 [2024-07-15 09:40:02.878953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.810 [2024-07-15 09:40:02.878958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.810 [2024-07-15 09:40:02.878962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.810 [2024-07-15 09:40:02.878973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.810 qpair failed and we were unable to recover it. 00:31:15.810 [2024-07-15 09:40:02.888827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.810 [2024-07-15 09:40:02.888876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.810 [2024-07-15 09:40:02.888887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.810 [2024-07-15 09:40:02.888892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.810 [2024-07-15 09:40:02.888897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.810 [2024-07-15 09:40:02.888907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.810 qpair failed and we were unable to recover it. 
00:31:15.810 [2024-07-15 09:40:02.898981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.810 [2024-07-15 09:40:02.899037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.810 [2024-07-15 09:40:02.899048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.810 [2024-07-15 09:40:02.899053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.810 [2024-07-15 09:40:02.899058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.810 [2024-07-15 09:40:02.899068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.810 qpair failed and we were unable to recover it. 00:31:15.811 [2024-07-15 09:40:02.909012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.811 [2024-07-15 09:40:02.909066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.811 [2024-07-15 09:40:02.909077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.811 [2024-07-15 09:40:02.909082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.811 [2024-07-15 09:40:02.909086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.811 [2024-07-15 09:40:02.909100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.811 qpair failed and we were unable to recover it. 00:31:15.811 [2024-07-15 09:40:02.919011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.811 [2024-07-15 09:40:02.919066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.811 [2024-07-15 09:40:02.919077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.811 [2024-07-15 09:40:02.919082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.811 [2024-07-15 09:40:02.919087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.811 [2024-07-15 09:40:02.919097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.811 qpair failed and we were unable to recover it. 
00:31:15.811 [2024-07-15 09:40:02.929047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.811 [2024-07-15 09:40:02.929099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.811 [2024-07-15 09:40:02.929110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.811 [2024-07-15 09:40:02.929115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.811 [2024-07-15 09:40:02.929120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.811 [2024-07-15 09:40:02.929131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.811 qpair failed and we were unable to recover it. 00:31:15.811 [2024-07-15 09:40:02.938999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.811 [2024-07-15 09:40:02.939049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.811 [2024-07-15 09:40:02.939060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.811 [2024-07-15 09:40:02.939065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.811 [2024-07-15 09:40:02.939069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.811 [2024-07-15 09:40:02.939080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.811 qpair failed and we were unable to recover it. 00:31:15.811 [2024-07-15 09:40:02.949098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.811 [2024-07-15 09:40:02.949160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.811 [2024-07-15 09:40:02.949170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.811 [2024-07-15 09:40:02.949176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.811 [2024-07-15 09:40:02.949180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.811 [2024-07-15 09:40:02.949190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.811 qpair failed and we were unable to recover it. 
00:31:15.811 [2024-07-15 09:40:02.959016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.811 [2024-07-15 09:40:02.959063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.811 [2024-07-15 09:40:02.959077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.811 [2024-07-15 09:40:02.959082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.811 [2024-07-15 09:40:02.959087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.811 [2024-07-15 09:40:02.959097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.811 qpair failed and we were unable to recover it. 00:31:15.811 [2024-07-15 09:40:02.969147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.811 [2024-07-15 09:40:02.969194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.811 [2024-07-15 09:40:02.969205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.811 [2024-07-15 09:40:02.969210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.811 [2024-07-15 09:40:02.969214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.811 [2024-07-15 09:40:02.969225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.811 qpair failed and we were unable to recover it. 00:31:15.811 [2024-07-15 09:40:02.979176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.811 [2024-07-15 09:40:02.979227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.811 [2024-07-15 09:40:02.979238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.811 [2024-07-15 09:40:02.979243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.811 [2024-07-15 09:40:02.979247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.811 [2024-07-15 09:40:02.979258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.811 qpair failed and we were unable to recover it. 
00:31:15.811 [2024-07-15 09:40:02.989102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.811 [2024-07-15 09:40:02.989157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.811 [2024-07-15 09:40:02.989169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.811 [2024-07-15 09:40:02.989174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.811 [2024-07-15 09:40:02.989178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.811 [2024-07-15 09:40:02.989189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.811 qpair failed and we were unable to recover it. 00:31:15.811 [2024-07-15 09:40:02.999109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.811 [2024-07-15 09:40:02.999165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.811 [2024-07-15 09:40:02.999176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.811 [2024-07-15 09:40:02.999181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.811 [2024-07-15 09:40:02.999186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:15.811 [2024-07-15 09:40:02.999200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.811 qpair failed and we were unable to recover it. 00:31:16.073 [2024-07-15 09:40:03.009322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.073 [2024-07-15 09:40:03.009370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.073 [2024-07-15 09:40:03.009381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.073 [2024-07-15 09:40:03.009386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.073 [2024-07-15 09:40:03.009391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.073 [2024-07-15 09:40:03.009401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.073 qpair failed and we were unable to recover it. 
00:31:16.073 [2024-07-15 09:40:03.019263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.073 [2024-07-15 09:40:03.019311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.073 [2024-07-15 09:40:03.019322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.074 [2024-07-15 09:40:03.019327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.074 [2024-07-15 09:40:03.019332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.074 [2024-07-15 09:40:03.019342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.074 qpair failed and we were unable to recover it. 00:31:16.074 [2024-07-15 09:40:03.029323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.074 [2024-07-15 09:40:03.029377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.074 [2024-07-15 09:40:03.029389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.074 [2024-07-15 09:40:03.029394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.074 [2024-07-15 09:40:03.029398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.074 [2024-07-15 09:40:03.029411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.074 qpair failed and we were unable to recover it. 00:31:16.074 [2024-07-15 09:40:03.039328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.074 [2024-07-15 09:40:03.039414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.074 [2024-07-15 09:40:03.039425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.074 [2024-07-15 09:40:03.039430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.074 [2024-07-15 09:40:03.039435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.074 [2024-07-15 09:40:03.039446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.074 qpair failed and we were unable to recover it. 
00:31:16.074 [2024-07-15 09:40:03.049375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.074 [2024-07-15 09:40:03.049425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.074 [2024-07-15 09:40:03.049439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.074 [2024-07-15 09:40:03.049444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.074 [2024-07-15 09:40:03.049449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.074 [2024-07-15 09:40:03.049459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.074 qpair failed and we were unable to recover it. 00:31:16.074 [2024-07-15 09:40:03.059412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.074 [2024-07-15 09:40:03.059468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.074 [2024-07-15 09:40:03.059479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.074 [2024-07-15 09:40:03.059485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.074 [2024-07-15 09:40:03.059489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.074 [2024-07-15 09:40:03.059500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.074 qpair failed and we were unable to recover it. 00:31:16.074 [2024-07-15 09:40:03.069298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.074 [2024-07-15 09:40:03.069356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.074 [2024-07-15 09:40:03.069368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.074 [2024-07-15 09:40:03.069373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.074 [2024-07-15 09:40:03.069378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.074 [2024-07-15 09:40:03.069389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.074 qpair failed and we were unable to recover it. 
00:31:16.074 [2024-07-15 09:40:03.079453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.074 [2024-07-15 09:40:03.079501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.074 [2024-07-15 09:40:03.079512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.074 [2024-07-15 09:40:03.079518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.074 [2024-07-15 09:40:03.079522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.074 [2024-07-15 09:40:03.079533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.074 qpair failed and we were unable to recover it. 00:31:16.074 [2024-07-15 09:40:03.089473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.074 [2024-07-15 09:40:03.089521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.074 [2024-07-15 09:40:03.089533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.074 [2024-07-15 09:40:03.089538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.074 [2024-07-15 09:40:03.089545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.074 [2024-07-15 09:40:03.089555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.074 qpair failed and we were unable to recover it. 00:31:16.074 [2024-07-15 09:40:03.099525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.074 [2024-07-15 09:40:03.099575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.074 [2024-07-15 09:40:03.099586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.074 [2024-07-15 09:40:03.099591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.074 [2024-07-15 09:40:03.099596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.074 [2024-07-15 09:40:03.099606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.074 qpair failed and we were unable to recover it. 
00:31:16.074 [2024-07-15 09:40:03.109432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.074 [2024-07-15 09:40:03.109486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.074 [2024-07-15 09:40:03.109496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.074 [2024-07-15 09:40:03.109501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.074 [2024-07-15 09:40:03.109506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.074 [2024-07-15 09:40:03.109516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.074 qpair failed and we were unable to recover it. 00:31:16.074 [2024-07-15 09:40:03.119568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.075 [2024-07-15 09:40:03.119624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.075 [2024-07-15 09:40:03.119634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.075 [2024-07-15 09:40:03.119639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.075 [2024-07-15 09:40:03.119644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.075 [2024-07-15 09:40:03.119654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.075 qpair failed and we were unable to recover it. 00:31:16.075 [2024-07-15 09:40:03.129590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.075 [2024-07-15 09:40:03.129634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.075 [2024-07-15 09:40:03.129645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.075 [2024-07-15 09:40:03.129650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.075 [2024-07-15 09:40:03.129654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.075 [2024-07-15 09:40:03.129664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.075 qpair failed and we were unable to recover it. 
00:31:16.075 [2024-07-15 09:40:03.139586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.075 [2024-07-15 09:40:03.139691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.075 [2024-07-15 09:40:03.139702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.075 [2024-07-15 09:40:03.139708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.075 [2024-07-15 09:40:03.139712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.075 [2024-07-15 09:40:03.139723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.075 qpair failed and we were unable to recover it. 00:31:16.075 [2024-07-15 09:40:03.149530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.075 [2024-07-15 09:40:03.149626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.075 [2024-07-15 09:40:03.149637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.075 [2024-07-15 09:40:03.149642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.075 [2024-07-15 09:40:03.149647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.075 [2024-07-15 09:40:03.149657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.075 qpair failed and we were unable to recover it. 00:31:16.075 [2024-07-15 09:40:03.159670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.075 [2024-07-15 09:40:03.159720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.075 [2024-07-15 09:40:03.159731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.075 [2024-07-15 09:40:03.159736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.075 [2024-07-15 09:40:03.159740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.075 [2024-07-15 09:40:03.159754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.075 qpair failed and we were unable to recover it. 
00:31:16.075 [2024-07-15 09:40:03.169686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.075 [2024-07-15 09:40:03.169732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.075 [2024-07-15 09:40:03.169743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.075 [2024-07-15 09:40:03.169748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.075 [2024-07-15 09:40:03.169756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.075 [2024-07-15 09:40:03.169766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.075 qpair failed and we were unable to recover it. 00:31:16.075 [2024-07-15 09:40:03.179732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.075 [2024-07-15 09:40:03.179786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.075 [2024-07-15 09:40:03.179797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.075 [2024-07-15 09:40:03.179805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.075 [2024-07-15 09:40:03.179809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.075 [2024-07-15 09:40:03.179820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.075 qpair failed and we were unable to recover it. 00:31:16.075 [2024-07-15 09:40:03.189756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.075 [2024-07-15 09:40:03.189811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.075 [2024-07-15 09:40:03.189822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.075 [2024-07-15 09:40:03.189827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.075 [2024-07-15 09:40:03.189832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.075 [2024-07-15 09:40:03.189842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.075 qpair failed and we were unable to recover it. 
00:31:16.075 [2024-07-15 09:40:03.199792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.075 [2024-07-15 09:40:03.199844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.075 [2024-07-15 09:40:03.199855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.075 [2024-07-15 09:40:03.199860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.075 [2024-07-15 09:40:03.199864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.075 [2024-07-15 09:40:03.199875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.075 qpair failed and we were unable to recover it. 00:31:16.075 [2024-07-15 09:40:03.209803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.075 [2024-07-15 09:40:03.209854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.075 [2024-07-15 09:40:03.209864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.075 [2024-07-15 09:40:03.209869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.075 [2024-07-15 09:40:03.209874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.075 [2024-07-15 09:40:03.209884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.075 qpair failed and we were unable to recover it. 00:31:16.075 [2024-07-15 09:40:03.219841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.075 [2024-07-15 09:40:03.219892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.075 [2024-07-15 09:40:03.219902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.075 [2024-07-15 09:40:03.219907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.075 [2024-07-15 09:40:03.219912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.075 [2024-07-15 09:40:03.219922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.075 qpair failed and we were unable to recover it. 
00:31:16.075 [2024-07-15 09:40:03.229864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.075 [2024-07-15 09:40:03.229923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.075 [2024-07-15 09:40:03.229934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.075 [2024-07-15 09:40:03.229939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.075 [2024-07-15 09:40:03.229944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.075 [2024-07-15 09:40:03.229954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.075 qpair failed and we were unable to recover it. 00:31:16.076 [2024-07-15 09:40:03.239773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.076 [2024-07-15 09:40:03.239823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.076 [2024-07-15 09:40:03.239834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.076 [2024-07-15 09:40:03.239840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.076 [2024-07-15 09:40:03.239844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.076 [2024-07-15 09:40:03.239855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.076 qpair failed and we were unable to recover it. 00:31:16.076 [2024-07-15 09:40:03.249927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.076 [2024-07-15 09:40:03.249973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.076 [2024-07-15 09:40:03.249984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.076 [2024-07-15 09:40:03.249989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.076 [2024-07-15 09:40:03.249994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.076 [2024-07-15 09:40:03.250004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.076 qpair failed and we were unable to recover it. 
00:31:16.076 [2024-07-15 09:40:03.260018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.076 [2024-07-15 09:40:03.260116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.076 [2024-07-15 09:40:03.260126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.076 [2024-07-15 09:40:03.260132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.076 [2024-07-15 09:40:03.260136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.076 [2024-07-15 09:40:03.260147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.076 qpair failed and we were unable to recover it. 00:31:16.076 [2024-07-15 09:40:03.269981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.076 [2024-07-15 09:40:03.270086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.076 [2024-07-15 09:40:03.270098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.076 [2024-07-15 09:40:03.270105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.076 [2024-07-15 09:40:03.270110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.076 [2024-07-15 09:40:03.270120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.076 qpair failed and we were unable to recover it. 00:31:16.338 [2024-07-15 09:40:03.280011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.338 [2024-07-15 09:40:03.280064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.338 [2024-07-15 09:40:03.280075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.338 [2024-07-15 09:40:03.280080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.338 [2024-07-15 09:40:03.280084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.338 [2024-07-15 09:40:03.280094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.338 qpair failed and we were unable to recover it. 
00:31:16.338 [2024-07-15 09:40:03.289938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.338 [2024-07-15 09:40:03.290034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.338 [2024-07-15 09:40:03.290045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.338 [2024-07-15 09:40:03.290051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.338 [2024-07-15 09:40:03.290055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.338 [2024-07-15 09:40:03.290065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.338 qpair failed and we were unable to recover it. 00:31:16.338 [2024-07-15 09:40:03.299949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.338 [2024-07-15 09:40:03.300000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.338 [2024-07-15 09:40:03.300011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.338 [2024-07-15 09:40:03.300016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.338 [2024-07-15 09:40:03.300021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.338 [2024-07-15 09:40:03.300031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.338 qpair failed and we were unable to recover it. 00:31:16.338 [2024-07-15 09:40:03.310071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.338 [2024-07-15 09:40:03.310125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.338 [2024-07-15 09:40:03.310136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.338 [2024-07-15 09:40:03.310141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.338 [2024-07-15 09:40:03.310146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.338 [2024-07-15 09:40:03.310156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.338 qpair failed and we were unable to recover it. 
00:31:16.338 [2024-07-15 09:40:03.320002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.338 [2024-07-15 09:40:03.320057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.338 [2024-07-15 09:40:03.320069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.338 [2024-07-15 09:40:03.320074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.338 [2024-07-15 09:40:03.320078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.338 [2024-07-15 09:40:03.320089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.338 qpair failed and we were unable to recover it. 00:31:16.338 [2024-07-15 09:40:03.330141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.338 [2024-07-15 09:40:03.330191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.338 [2024-07-15 09:40:03.330201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.338 [2024-07-15 09:40:03.330206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.338 [2024-07-15 09:40:03.330210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.338 [2024-07-15 09:40:03.330220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.338 qpair failed and we were unable to recover it. 00:31:16.338 [2024-07-15 09:40:03.340176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.338 [2024-07-15 09:40:03.340228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.338 [2024-07-15 09:40:03.340239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.338 [2024-07-15 09:40:03.340244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.339 [2024-07-15 09:40:03.340248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.339 [2024-07-15 09:40:03.340259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.339 qpair failed and we were unable to recover it. 
00:31:16.339 [2024-07-15 09:40:03.350113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.339 [2024-07-15 09:40:03.350215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.339 [2024-07-15 09:40:03.350227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.339 [2024-07-15 09:40:03.350232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.339 [2024-07-15 09:40:03.350237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.339 [2024-07-15 09:40:03.350247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.339 qpair failed and we were unable to recover it. 00:31:16.339 [2024-07-15 09:40:03.360236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.339 [2024-07-15 09:40:03.360289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.339 [2024-07-15 09:40:03.360303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.339 [2024-07-15 09:40:03.360308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.339 [2024-07-15 09:40:03.360312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.339 [2024-07-15 09:40:03.360323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.339 qpair failed and we were unable to recover it. 00:31:16.339 [2024-07-15 09:40:03.370252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.339 [2024-07-15 09:40:03.370316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.339 [2024-07-15 09:40:03.370329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.339 [2024-07-15 09:40:03.370334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.339 [2024-07-15 09:40:03.370339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.339 [2024-07-15 09:40:03.370350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.339 qpair failed and we were unable to recover it. 
00:31:16.339 [2024-07-15 09:40:03.380212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.339 [2024-07-15 09:40:03.380270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.339 [2024-07-15 09:40:03.380281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.339 [2024-07-15 09:40:03.380286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.339 [2024-07-15 09:40:03.380290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.339 [2024-07-15 09:40:03.380301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.339 qpair failed and we were unable to recover it. 00:31:16.339 [2024-07-15 09:40:03.390317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.339 [2024-07-15 09:40:03.390369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.339 [2024-07-15 09:40:03.390380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.339 [2024-07-15 09:40:03.390385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.339 [2024-07-15 09:40:03.390390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.339 [2024-07-15 09:40:03.390400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.339 qpair failed and we were unable to recover it. 00:31:16.339 [2024-07-15 09:40:03.400314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.339 [2024-07-15 09:40:03.400364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.339 [2024-07-15 09:40:03.400375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.339 [2024-07-15 09:40:03.400380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.339 [2024-07-15 09:40:03.400385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.339 [2024-07-15 09:40:03.400398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.339 qpair failed and we were unable to recover it. 
00:31:16.339 [2024-07-15 09:40:03.410359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.339 [2024-07-15 09:40:03.410408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.339 [2024-07-15 09:40:03.410419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.339 [2024-07-15 09:40:03.410424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.339 [2024-07-15 09:40:03.410429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.339 [2024-07-15 09:40:03.410439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.339 qpair failed and we were unable to recover it. 00:31:16.339 [2024-07-15 09:40:03.420322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.339 [2024-07-15 09:40:03.420424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.339 [2024-07-15 09:40:03.420436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.339 [2024-07-15 09:40:03.420441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.339 [2024-07-15 09:40:03.420447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.339 [2024-07-15 09:40:03.420457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.339 qpair failed and we were unable to recover it. 00:31:16.339 [2024-07-15 09:40:03.430445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.339 [2024-07-15 09:40:03.430529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.339 [2024-07-15 09:40:03.430540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.339 [2024-07-15 09:40:03.430545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.339 [2024-07-15 09:40:03.430549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.339 [2024-07-15 09:40:03.430560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.339 qpair failed and we were unable to recover it. 
00:31:16.339 [2024-07-15 09:40:03.440458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.339 [2024-07-15 09:40:03.440510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.339 [2024-07-15 09:40:03.440521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.339 [2024-07-15 09:40:03.440526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.339 [2024-07-15 09:40:03.440530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.339 [2024-07-15 09:40:03.440541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.339 qpair failed and we were unable to recover it. 00:31:16.339 [2024-07-15 09:40:03.450463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.339 [2024-07-15 09:40:03.450514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.339 [2024-07-15 09:40:03.450528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.339 [2024-07-15 09:40:03.450533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.339 [2024-07-15 09:40:03.450537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.339 [2024-07-15 09:40:03.450547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.339 qpair failed and we were unable to recover it. 00:31:16.339 [2024-07-15 09:40:03.460549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.339 [2024-07-15 09:40:03.460598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.339 [2024-07-15 09:40:03.460609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.339 [2024-07-15 09:40:03.460614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.339 [2024-07-15 09:40:03.460618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.339 [2024-07-15 09:40:03.460628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.339 qpair failed and we were unable to recover it. 
00:31:16.339 [2024-07-15 09:40:03.470537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.339 [2024-07-15 09:40:03.470594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.339 [2024-07-15 09:40:03.470604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.339 [2024-07-15 09:40:03.470609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.339 [2024-07-15 09:40:03.470614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.339 [2024-07-15 09:40:03.470624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.339 qpair failed and we were unable to recover it. 00:31:16.339 [2024-07-15 09:40:03.480545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.339 [2024-07-15 09:40:03.480595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.339 [2024-07-15 09:40:03.480606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.339 [2024-07-15 09:40:03.480611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.339 [2024-07-15 09:40:03.480615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.340 [2024-07-15 09:40:03.480625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.340 qpair failed and we were unable to recover it. 00:31:16.340 [2024-07-15 09:40:03.490575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.340 [2024-07-15 09:40:03.490668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.340 [2024-07-15 09:40:03.490680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.340 [2024-07-15 09:40:03.490685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.340 [2024-07-15 09:40:03.490692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.340 [2024-07-15 09:40:03.490703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.340 qpair failed and we were unable to recover it. 
00:31:16.340 [2024-07-15 09:40:03.500630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.340 [2024-07-15 09:40:03.500682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.340 [2024-07-15 09:40:03.500694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.340 [2024-07-15 09:40:03.500699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.340 [2024-07-15 09:40:03.500704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.340 [2024-07-15 09:40:03.500714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.340 qpair failed and we were unable to recover it. 00:31:16.340 [2024-07-15 09:40:03.510512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.340 [2024-07-15 09:40:03.510564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.340 [2024-07-15 09:40:03.510575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.340 [2024-07-15 09:40:03.510580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.340 [2024-07-15 09:40:03.510585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.340 [2024-07-15 09:40:03.510595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.340 qpair failed and we were unable to recover it. 00:31:16.340 [2024-07-15 09:40:03.520677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.340 [2024-07-15 09:40:03.520732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.340 [2024-07-15 09:40:03.520743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.340 [2024-07-15 09:40:03.520748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.340 [2024-07-15 09:40:03.520756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.340 [2024-07-15 09:40:03.520767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.340 qpair failed and we were unable to recover it. 
00:31:16.340 [2024-07-15 09:40:03.530557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.340 [2024-07-15 09:40:03.530602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.340 [2024-07-15 09:40:03.530612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.340 [2024-07-15 09:40:03.530618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.340 [2024-07-15 09:40:03.530622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.340 [2024-07-15 09:40:03.530632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.340 qpair failed and we were unable to recover it. 00:31:16.604 [2024-07-15 09:40:03.540684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.604 [2024-07-15 09:40:03.540742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.604 [2024-07-15 09:40:03.540756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.604 [2024-07-15 09:40:03.540761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.604 [2024-07-15 09:40:03.540766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.604 [2024-07-15 09:40:03.540776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.604 qpair failed and we were unable to recover it. 00:31:16.604 [2024-07-15 09:40:03.550627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.604 [2024-07-15 09:40:03.550680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.604 [2024-07-15 09:40:03.550691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.604 [2024-07-15 09:40:03.550696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.604 [2024-07-15 09:40:03.550701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.604 [2024-07-15 09:40:03.550712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.604 qpair failed and we were unable to recover it. 
00:31:16.604 [2024-07-15 09:40:03.560789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.604 [2024-07-15 09:40:03.560841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.604 [2024-07-15 09:40:03.560853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.604 [2024-07-15 09:40:03.560858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.604 [2024-07-15 09:40:03.560863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.604 [2024-07-15 09:40:03.560874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.604 qpair failed and we were unable to recover it. 00:31:16.604 [2024-07-15 09:40:03.570813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.604 [2024-07-15 09:40:03.570863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.604 [2024-07-15 09:40:03.570875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.604 [2024-07-15 09:40:03.570880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.604 [2024-07-15 09:40:03.570884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.604 [2024-07-15 09:40:03.570895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.604 qpair failed and we were unable to recover it. 00:31:16.604 [2024-07-15 09:40:03.580835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.604 [2024-07-15 09:40:03.580884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.604 [2024-07-15 09:40:03.580894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.604 [2024-07-15 09:40:03.580902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.604 [2024-07-15 09:40:03.580906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.604 [2024-07-15 09:40:03.580917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.604 qpair failed and we were unable to recover it. 
00:31:16.604 [2024-07-15 09:40:03.590796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.604 [2024-07-15 09:40:03.590853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.604 [2024-07-15 09:40:03.590863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.604 [2024-07-15 09:40:03.590868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.604 [2024-07-15 09:40:03.590873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.604 [2024-07-15 09:40:03.590883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.604 qpair failed and we were unable to recover it. 00:31:16.604 [2024-07-15 09:40:03.600887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.604 [2024-07-15 09:40:03.600939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.604 [2024-07-15 09:40:03.600950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.604 [2024-07-15 09:40:03.600955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.604 [2024-07-15 09:40:03.600960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.604 [2024-07-15 09:40:03.600971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.604 qpair failed and we were unable to recover it. 00:31:16.604 [2024-07-15 09:40:03.610931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.604 [2024-07-15 09:40:03.610977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.604 [2024-07-15 09:40:03.610987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.604 [2024-07-15 09:40:03.610993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.604 [2024-07-15 09:40:03.610997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.604 [2024-07-15 09:40:03.611007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.604 qpair failed and we were unable to recover it. 
00:31:16.604 [2024-07-15 09:40:03.620835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.604 [2024-07-15 09:40:03.620885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.604 [2024-07-15 09:40:03.620896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.604 [2024-07-15 09:40:03.620901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.604 [2024-07-15 09:40:03.620905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.604 [2024-07-15 09:40:03.620916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.604 qpair failed and we were unable to recover it. 00:31:16.604 [2024-07-15 09:40:03.630969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.604 [2024-07-15 09:40:03.631030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.604 [2024-07-15 09:40:03.631041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.604 [2024-07-15 09:40:03.631046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.604 [2024-07-15 09:40:03.631050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.604 [2024-07-15 09:40:03.631061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.604 qpair failed and we were unable to recover it. 00:31:16.604 [2024-07-15 09:40:03.640890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.604 [2024-07-15 09:40:03.640944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.604 [2024-07-15 09:40:03.640956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.604 [2024-07-15 09:40:03.640961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.604 [2024-07-15 09:40:03.640965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.604 [2024-07-15 09:40:03.640975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.604 qpair failed and we were unable to recover it. 
00:31:16.604 [2024-07-15 09:40:03.651056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.604 [2024-07-15 09:40:03.651135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.604 [2024-07-15 09:40:03.651146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.604 [2024-07-15 09:40:03.651151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.604 [2024-07-15 09:40:03.651155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.604 [2024-07-15 09:40:03.651166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.604 qpair failed and we were unable to recover it. 00:31:16.604 [2024-07-15 09:40:03.661079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.604 [2024-07-15 09:40:03.661127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.604 [2024-07-15 09:40:03.661137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.604 [2024-07-15 09:40:03.661142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.604 [2024-07-15 09:40:03.661146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.604 [2024-07-15 09:40:03.661157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.604 qpair failed and we were unable to recover it. 00:31:16.604 [2024-07-15 09:40:03.671108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.604 [2024-07-15 09:40:03.671161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.604 [2024-07-15 09:40:03.671172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.604 [2024-07-15 09:40:03.671179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.604 [2024-07-15 09:40:03.671184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.605 [2024-07-15 09:40:03.671194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.605 qpair failed and we were unable to recover it. 
00:31:16.605 [2024-07-15 09:40:03.681003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.605 [2024-07-15 09:40:03.681049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.605 [2024-07-15 09:40:03.681060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.605 [2024-07-15 09:40:03.681066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.605 [2024-07-15 09:40:03.681070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.605 [2024-07-15 09:40:03.681080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.605 qpair failed and we were unable to recover it. 00:31:16.605 [2024-07-15 09:40:03.691014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.605 [2024-07-15 09:40:03.691064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.605 [2024-07-15 09:40:03.691075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.605 [2024-07-15 09:40:03.691080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.605 [2024-07-15 09:40:03.691085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.605 [2024-07-15 09:40:03.691095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.605 qpair failed and we were unable to recover it. 00:31:16.605 [2024-07-15 09:40:03.701133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.605 [2024-07-15 09:40:03.701186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.605 [2024-07-15 09:40:03.701197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.605 [2024-07-15 09:40:03.701202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.605 [2024-07-15 09:40:03.701206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.605 [2024-07-15 09:40:03.701217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.605 qpair failed and we were unable to recover it. 
00:31:16.605 [2024-07-15 09:40:03.711216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.605 [2024-07-15 09:40:03.711274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.605 [2024-07-15 09:40:03.711285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.605 [2024-07-15 09:40:03.711291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.605 [2024-07-15 09:40:03.711295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.605 [2024-07-15 09:40:03.711305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.605 qpair failed and we were unable to recover it. 00:31:16.605 [2024-07-15 09:40:03.721227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.605 [2024-07-15 09:40:03.721298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.605 [2024-07-15 09:40:03.721309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.605 [2024-07-15 09:40:03.721314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.605 [2024-07-15 09:40:03.721318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.605 [2024-07-15 09:40:03.721328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.605 qpair failed and we were unable to recover it. 00:31:16.605 [2024-07-15 09:40:03.731175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.605 [2024-07-15 09:40:03.731223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.605 [2024-07-15 09:40:03.731233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.605 [2024-07-15 09:40:03.731238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.605 [2024-07-15 09:40:03.731243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.605 [2024-07-15 09:40:03.731253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.605 qpair failed and we were unable to recover it. 
00:31:16.605 [2024-07-15 09:40:03.741286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.605 [2024-07-15 09:40:03.741337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.605 [2024-07-15 09:40:03.741348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.605 [2024-07-15 09:40:03.741353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.605 [2024-07-15 09:40:03.741357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.605 [2024-07-15 09:40:03.741367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.605 qpair failed and we were unable to recover it. 00:31:16.605 [2024-07-15 09:40:03.751191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.605 [2024-07-15 09:40:03.751248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.605 [2024-07-15 09:40:03.751259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.605 [2024-07-15 09:40:03.751264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.605 [2024-07-15 09:40:03.751268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.605 [2024-07-15 09:40:03.751278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.605 qpair failed and we were unable to recover it. 00:31:16.605 [2024-07-15 09:40:03.761332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.605 [2024-07-15 09:40:03.761383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.605 [2024-07-15 09:40:03.761397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.605 [2024-07-15 09:40:03.761402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.605 [2024-07-15 09:40:03.761407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.605 [2024-07-15 09:40:03.761417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.605 qpair failed and we were unable to recover it. 
00:31:16.605 [2024-07-15 09:40:03.771367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.605 [2024-07-15 09:40:03.771418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.605 [2024-07-15 09:40:03.771428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.605 [2024-07-15 09:40:03.771434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.605 [2024-07-15 09:40:03.771438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.605 [2024-07-15 09:40:03.771448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.605 qpair failed and we were unable to recover it. 00:31:16.605 [2024-07-15 09:40:03.781431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.605 [2024-07-15 09:40:03.781481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.605 [2024-07-15 09:40:03.781492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.605 [2024-07-15 09:40:03.781497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.605 [2024-07-15 09:40:03.781501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.605 [2024-07-15 09:40:03.781511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.605 qpair failed and we were unable to recover it. 00:31:16.605 [2024-07-15 09:40:03.791461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.605 [2024-07-15 09:40:03.791516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.605 [2024-07-15 09:40:03.791527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.605 [2024-07-15 09:40:03.791532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.605 [2024-07-15 09:40:03.791536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.605 [2024-07-15 09:40:03.791546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.605 qpair failed and we were unable to recover it. 
00:31:16.868 [2024-07-15 09:40:03.801307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.868 [2024-07-15 09:40:03.801352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.868 [2024-07-15 09:40:03.801363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.868 [2024-07-15 09:40:03.801369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.868 [2024-07-15 09:40:03.801373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.868 [2024-07-15 09:40:03.801387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.868 qpair failed and we were unable to recover it. 00:31:16.868 [2024-07-15 09:40:03.811485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.868 [2024-07-15 09:40:03.811560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.868 [2024-07-15 09:40:03.811571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.868 [2024-07-15 09:40:03.811576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.868 [2024-07-15 09:40:03.811580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.868 [2024-07-15 09:40:03.811591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.868 qpair failed and we were unable to recover it. 00:31:16.868 [2024-07-15 09:40:03.821356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.868 [2024-07-15 09:40:03.821404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.868 [2024-07-15 09:40:03.821422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.868 [2024-07-15 09:40:03.821428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.868 [2024-07-15 09:40:03.821433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.868 [2024-07-15 09:40:03.821446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.868 qpair failed and we were unable to recover it. 
00:31:16.868 [2024-07-15 09:40:03.831565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.868 [2024-07-15 09:40:03.831657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.868 [2024-07-15 09:40:03.831675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.868 [2024-07-15 09:40:03.831681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.868 [2024-07-15 09:40:03.831686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.868 [2024-07-15 09:40:03.831699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.868 qpair failed and we were unable to recover it. 00:31:16.868 [2024-07-15 09:40:03.841557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.869 [2024-07-15 09:40:03.841604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.869 [2024-07-15 09:40:03.841616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.869 [2024-07-15 09:40:03.841621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.869 [2024-07-15 09:40:03.841626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.869 [2024-07-15 09:40:03.841637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.869 qpair failed and we were unable to recover it. 00:31:16.869 [2024-07-15 09:40:03.851585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.869 [2024-07-15 09:40:03.851628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.869 [2024-07-15 09:40:03.851643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.869 [2024-07-15 09:40:03.851648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.869 [2024-07-15 09:40:03.851652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.869 [2024-07-15 09:40:03.851663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.869 qpair failed and we were unable to recover it. 
00:31:16.869 [2024-07-15 09:40:03.861592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.869 [2024-07-15 09:40:03.861638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.869 [2024-07-15 09:40:03.861648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.869 [2024-07-15 09:40:03.861653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.869 [2024-07-15 09:40:03.861658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.869 [2024-07-15 09:40:03.861668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.869 qpair failed and we were unable to recover it. 00:31:16.869 [2024-07-15 09:40:03.871634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.869 [2024-07-15 09:40:03.871684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.869 [2024-07-15 09:40:03.871694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.869 [2024-07-15 09:40:03.871699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.869 [2024-07-15 09:40:03.871704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.869 [2024-07-15 09:40:03.871714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.869 qpair failed and we were unable to recover it. 00:31:16.869 [2024-07-15 09:40:03.881663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.869 [2024-07-15 09:40:03.881707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.869 [2024-07-15 09:40:03.881718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.869 [2024-07-15 09:40:03.881723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.869 [2024-07-15 09:40:03.881727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.869 [2024-07-15 09:40:03.881738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.869 qpair failed and we were unable to recover it. 
00:31:16.869 [2024-07-15 09:40:03.891599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.869 [2024-07-15 09:40:03.891653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.869 [2024-07-15 09:40:03.891664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.869 [2024-07-15 09:40:03.891669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.869 [2024-07-15 09:40:03.891676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.869 [2024-07-15 09:40:03.891686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.869 qpair failed and we were unable to recover it. 00:31:16.869 [2024-07-15 09:40:03.901659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.869 [2024-07-15 09:40:03.901703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.869 [2024-07-15 09:40:03.901714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.869 [2024-07-15 09:40:03.901719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.869 [2024-07-15 09:40:03.901723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.869 [2024-07-15 09:40:03.901734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.869 qpair failed and we were unable to recover it. 00:31:16.869 [2024-07-15 09:40:03.911754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.869 [2024-07-15 09:40:03.911802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.869 [2024-07-15 09:40:03.911814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.869 [2024-07-15 09:40:03.911819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.869 [2024-07-15 09:40:03.911824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.869 [2024-07-15 09:40:03.911834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.869 qpair failed and we were unable to recover it. 
00:31:16.869 [2024-07-15 09:40:03.921805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.869 [2024-07-15 09:40:03.921897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.869 [2024-07-15 09:40:03.921908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.869 [2024-07-15 09:40:03.921913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.869 [2024-07-15 09:40:03.921918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.869 [2024-07-15 09:40:03.921928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.869 qpair failed and we were unable to recover it. 00:31:16.869 [2024-07-15 09:40:03.931813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.869 [2024-07-15 09:40:03.931863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.869 [2024-07-15 09:40:03.931873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.869 [2024-07-15 09:40:03.931878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.869 [2024-07-15 09:40:03.931883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.869 [2024-07-15 09:40:03.931893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.869 qpair failed and we were unable to recover it. 00:31:16.869 [2024-07-15 09:40:03.941794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.869 [2024-07-15 09:40:03.941851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.869 [2024-07-15 09:40:03.941862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.869 [2024-07-15 09:40:03.941867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.869 [2024-07-15 09:40:03.941872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.869 [2024-07-15 09:40:03.941882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.869 qpair failed and we were unable to recover it. 
00:31:16.869 [2024-07-15 09:40:03.951845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.869 [2024-07-15 09:40:03.951896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.869 [2024-07-15 09:40:03.951907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.869 [2024-07-15 09:40:03.951912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.869 [2024-07-15 09:40:03.951916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.869 [2024-07-15 09:40:03.951927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.869 qpair failed and we were unable to recover it. 00:31:16.869 [2024-07-15 09:40:03.961891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.869 [2024-07-15 09:40:03.961938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.869 [2024-07-15 09:40:03.961949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.869 [2024-07-15 09:40:03.961954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.869 [2024-07-15 09:40:03.961959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.869 [2024-07-15 09:40:03.961969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.869 qpair failed and we were unable to recover it. 00:31:16.869 [2024-07-15 09:40:03.971791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.869 [2024-07-15 09:40:03.971838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.869 [2024-07-15 09:40:03.971848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.869 [2024-07-15 09:40:03.971853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.869 [2024-07-15 09:40:03.971858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.869 [2024-07-15 09:40:03.971868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.869 qpair failed and we were unable to recover it. 
00:31:16.869 [2024-07-15 09:40:03.981886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.869 [2024-07-15 09:40:03.981932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.869 [2024-07-15 09:40:03.981942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.870 [2024-07-15 09:40:03.981948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.870 [2024-07-15 09:40:03.981958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.870 [2024-07-15 09:40:03.981968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.870 qpair failed and we were unable to recover it. 00:31:16.870 [2024-07-15 09:40:03.991927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.870 [2024-07-15 09:40:03.991984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.870 [2024-07-15 09:40:03.991996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.870 [2024-07-15 09:40:03.992001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.870 [2024-07-15 09:40:03.992005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.870 [2024-07-15 09:40:03.992016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.870 qpair failed and we were unable to recover it. 00:31:16.870 [2024-07-15 09:40:04.001984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.870 [2024-07-15 09:40:04.002032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.870 [2024-07-15 09:40:04.002043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.870 [2024-07-15 09:40:04.002048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.870 [2024-07-15 09:40:04.002052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.870 [2024-07-15 09:40:04.002063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.870 qpair failed and we were unable to recover it. 
00:31:16.870 [2024-07-15 09:40:04.012027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.870 [2024-07-15 09:40:04.012074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.870 [2024-07-15 09:40:04.012085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.870 [2024-07-15 09:40:04.012090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.870 [2024-07-15 09:40:04.012095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.870 [2024-07-15 09:40:04.012105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.870 qpair failed and we were unable to recover it. 00:31:16.870 [2024-07-15 09:40:04.022026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.870 [2024-07-15 09:40:04.022070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.870 [2024-07-15 09:40:04.022081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.870 [2024-07-15 09:40:04.022086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.870 [2024-07-15 09:40:04.022091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.870 [2024-07-15 09:40:04.022101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.870 qpair failed and we were unable to recover it. 00:31:16.870 [2024-07-15 09:40:04.032069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.870 [2024-07-15 09:40:04.032119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.870 [2024-07-15 09:40:04.032131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.870 [2024-07-15 09:40:04.032136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.870 [2024-07-15 09:40:04.032140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.870 [2024-07-15 09:40:04.032151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.870 qpair failed and we were unable to recover it. 
00:31:16.870 [2024-07-15 09:40:04.042104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.870 [2024-07-15 09:40:04.042149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.870 [2024-07-15 09:40:04.042160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.870 [2024-07-15 09:40:04.042165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.870 [2024-07-15 09:40:04.042169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.870 [2024-07-15 09:40:04.042179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.870 qpair failed and we were unable to recover it. 00:31:16.870 [2024-07-15 09:40:04.052027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.870 [2024-07-15 09:40:04.052078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.870 [2024-07-15 09:40:04.052089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.870 [2024-07-15 09:40:04.052094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.870 [2024-07-15 09:40:04.052098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.870 [2024-07-15 09:40:04.052109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.870 qpair failed and we were unable to recover it. 00:31:16.870 [2024-07-15 09:40:04.062134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.870 [2024-07-15 09:40:04.062176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.870 [2024-07-15 09:40:04.062187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.870 [2024-07-15 09:40:04.062192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.870 [2024-07-15 09:40:04.062196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:16.870 [2024-07-15 09:40:04.062207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.870 qpair failed and we were unable to recover it. 
00:31:17.133 [2024-07-15 09:40:04.072198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.133 [2024-07-15 09:40:04.072250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.133 [2024-07-15 09:40:04.072261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.133 [2024-07-15 09:40:04.072269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.133 [2024-07-15 09:40:04.072273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.133 [2024-07-15 09:40:04.072284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.133 qpair failed and we were unable to recover it. 00:31:17.133 [2024-07-15 09:40:04.082136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.133 [2024-07-15 09:40:04.082187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.133 [2024-07-15 09:40:04.082198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.133 [2024-07-15 09:40:04.082203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.133 [2024-07-15 09:40:04.082207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.133 [2024-07-15 09:40:04.082217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.133 qpair failed and we were unable to recover it. 00:31:17.133 [2024-07-15 09:40:04.092249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.133 [2024-07-15 09:40:04.092294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.133 [2024-07-15 09:40:04.092305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.133 [2024-07-15 09:40:04.092310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.133 [2024-07-15 09:40:04.092314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.133 [2024-07-15 09:40:04.092325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.133 qpair failed and we were unable to recover it. 
00:31:17.133 [2024-07-15 09:40:04.102236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.133 [2024-07-15 09:40:04.102287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.133 [2024-07-15 09:40:04.102298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.134 [2024-07-15 09:40:04.102303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.134 [2024-07-15 09:40:04.102308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.134 [2024-07-15 09:40:04.102318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.134 qpair failed and we were unable to recover it. 00:31:17.134 [2024-07-15 09:40:04.112302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.134 [2024-07-15 09:40:04.112355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.134 [2024-07-15 09:40:04.112366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.134 [2024-07-15 09:40:04.112371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.134 [2024-07-15 09:40:04.112376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.134 [2024-07-15 09:40:04.112387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.134 qpair failed and we were unable to recover it. 00:31:17.134 [2024-07-15 09:40:04.122320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.134 [2024-07-15 09:40:04.122366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.134 [2024-07-15 09:40:04.122377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.134 [2024-07-15 09:40:04.122382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.134 [2024-07-15 09:40:04.122387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.134 [2024-07-15 09:40:04.122398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.134 qpair failed and we were unable to recover it. 
00:31:17.134 [2024-07-15 09:40:04.132349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.134 [2024-07-15 09:40:04.132394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.134 [2024-07-15 09:40:04.132405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.134 [2024-07-15 09:40:04.132410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.134 [2024-07-15 09:40:04.132414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.134 [2024-07-15 09:40:04.132425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.134 qpair failed and we were unable to recover it. 00:31:17.134 [2024-07-15 09:40:04.142333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.134 [2024-07-15 09:40:04.142377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.134 [2024-07-15 09:40:04.142389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.134 [2024-07-15 09:40:04.142394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.134 [2024-07-15 09:40:04.142399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.134 [2024-07-15 09:40:04.142409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.134 qpair failed and we were unable to recover it. 00:31:17.134 [2024-07-15 09:40:04.152405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.134 [2024-07-15 09:40:04.152455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.134 [2024-07-15 09:40:04.152466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.134 [2024-07-15 09:40:04.152471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.134 [2024-07-15 09:40:04.152476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.134 [2024-07-15 09:40:04.152486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.134 qpair failed and we were unable to recover it. 
00:31:17.134 [2024-07-15 09:40:04.162422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.134 [2024-07-15 09:40:04.162466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.134 [2024-07-15 09:40:04.162480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.134 [2024-07-15 09:40:04.162486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.134 [2024-07-15 09:40:04.162490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.134 [2024-07-15 09:40:04.162501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.134 qpair failed and we were unable to recover it. 00:31:17.134 [2024-07-15 09:40:04.172459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.134 [2024-07-15 09:40:04.172511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.134 [2024-07-15 09:40:04.172522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.134 [2024-07-15 09:40:04.172527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.134 [2024-07-15 09:40:04.172532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.134 [2024-07-15 09:40:04.172542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.134 qpair failed and we were unable to recover it. 00:31:17.134 [2024-07-15 09:40:04.182427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.134 [2024-07-15 09:40:04.182469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.134 [2024-07-15 09:40:04.182480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.134 [2024-07-15 09:40:04.182485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.134 [2024-07-15 09:40:04.182490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.134 [2024-07-15 09:40:04.182500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.134 qpair failed and we were unable to recover it. 
00:31:17.134 [2024-07-15 09:40:04.192529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.134 [2024-07-15 09:40:04.192581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.134 [2024-07-15 09:40:04.192591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.134 [2024-07-15 09:40:04.192596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.134 [2024-07-15 09:40:04.192601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.134 [2024-07-15 09:40:04.192611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.134 qpair failed and we were unable to recover it. 00:31:17.134 [2024-07-15 09:40:04.202529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.134 [2024-07-15 09:40:04.202574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.134 [2024-07-15 09:40:04.202585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.134 [2024-07-15 09:40:04.202590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.134 [2024-07-15 09:40:04.202595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.134 [2024-07-15 09:40:04.202608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.134 qpair failed and we were unable to recover it. 00:31:17.134 [2024-07-15 09:40:04.212559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.134 [2024-07-15 09:40:04.212612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.134 [2024-07-15 09:40:04.212623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.134 [2024-07-15 09:40:04.212628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.134 [2024-07-15 09:40:04.212632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.134 [2024-07-15 09:40:04.212643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.134 qpair failed and we were unable to recover it. 
00:31:17.134 [2024-07-15 09:40:04.222430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.134 [2024-07-15 09:40:04.222476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.134 [2024-07-15 09:40:04.222487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.134 [2024-07-15 09:40:04.222492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.134 [2024-07-15 09:40:04.222497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.134 [2024-07-15 09:40:04.222507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.134 qpair failed and we were unable to recover it. 00:31:17.134 [2024-07-15 09:40:04.232624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.134 [2024-07-15 09:40:04.232671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.134 [2024-07-15 09:40:04.232682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.134 [2024-07-15 09:40:04.232687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.134 [2024-07-15 09:40:04.232692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.134 [2024-07-15 09:40:04.232702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.134 qpair failed and we were unable to recover it. 00:31:17.134 [2024-07-15 09:40:04.242633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.134 [2024-07-15 09:40:04.242685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.134 [2024-07-15 09:40:04.242696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.134 [2024-07-15 09:40:04.242701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.134 [2024-07-15 09:40:04.242705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.134 [2024-07-15 09:40:04.242716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.134 qpair failed and we were unable to recover it. 
00:31:17.135 [2024-07-15 09:40:04.252656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.135 [2024-07-15 09:40:04.252701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.135 [2024-07-15 09:40:04.252714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.135 [2024-07-15 09:40:04.252719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.135 [2024-07-15 09:40:04.252723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.135 [2024-07-15 09:40:04.252734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.135 qpair failed and we were unable to recover it. 00:31:17.135 [2024-07-15 09:40:04.262662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.135 [2024-07-15 09:40:04.262704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.135 [2024-07-15 09:40:04.262715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.135 [2024-07-15 09:40:04.262720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.135 [2024-07-15 09:40:04.262725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.135 [2024-07-15 09:40:04.262735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.135 qpair failed and we were unable to recover it. 00:31:17.135 [2024-07-15 09:40:04.272647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.135 [2024-07-15 09:40:04.272694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.135 [2024-07-15 09:40:04.272704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.135 [2024-07-15 09:40:04.272710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.135 [2024-07-15 09:40:04.272715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.135 [2024-07-15 09:40:04.272725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.135 qpair failed and we were unable to recover it. 
00:31:17.135 [2024-07-15 09:40:04.282754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.135 [2024-07-15 09:40:04.282804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.135 [2024-07-15 09:40:04.282815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.135 [2024-07-15 09:40:04.282820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.135 [2024-07-15 09:40:04.282825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.135 [2024-07-15 09:40:04.282835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.135 qpair failed and we were unable to recover it. 00:31:17.135 [2024-07-15 09:40:04.292785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.135 [2024-07-15 09:40:04.292835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.135 [2024-07-15 09:40:04.292845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.135 [2024-07-15 09:40:04.292850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.135 [2024-07-15 09:40:04.292855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.135 [2024-07-15 09:40:04.292868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.135 qpair failed and we were unable to recover it. 00:31:17.135 [2024-07-15 09:40:04.302779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.135 [2024-07-15 09:40:04.302862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.135 [2024-07-15 09:40:04.302873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.135 [2024-07-15 09:40:04.302879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.135 [2024-07-15 09:40:04.302883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.135 [2024-07-15 09:40:04.302894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.135 qpair failed and we were unable to recover it. 
00:31:17.135 [2024-07-15 09:40:04.312839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.135 [2024-07-15 09:40:04.312891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.135 [2024-07-15 09:40:04.312902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.135 [2024-07-15 09:40:04.312906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.135 [2024-07-15 09:40:04.312911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.135 [2024-07-15 09:40:04.312922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.135 qpair failed and we were unable to recover it. 00:31:17.135 [2024-07-15 09:40:04.322852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.135 [2024-07-15 09:40:04.322897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.135 [2024-07-15 09:40:04.322908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.135 [2024-07-15 09:40:04.322914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.135 [2024-07-15 09:40:04.322918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.135 [2024-07-15 09:40:04.322929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.135 qpair failed and we were unable to recover it. 00:31:17.397 [2024-07-15 09:40:04.332807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.397 [2024-07-15 09:40:04.332855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.397 [2024-07-15 09:40:04.332866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.397 [2024-07-15 09:40:04.332871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.397 [2024-07-15 09:40:04.332876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.397 [2024-07-15 09:40:04.332886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.397 qpair failed and we were unable to recover it. 
00:31:17.397 [2024-07-15 09:40:04.342772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.397 [2024-07-15 09:40:04.342823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.397 [2024-07-15 09:40:04.342834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.397 [2024-07-15 09:40:04.342839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.397 [2024-07-15 09:40:04.342843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.397 [2024-07-15 09:40:04.342854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.397 qpair failed and we were unable to recover it. 00:31:17.397 [2024-07-15 09:40:04.352973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.397 [2024-07-15 09:40:04.353024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.397 [2024-07-15 09:40:04.353034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.397 [2024-07-15 09:40:04.353039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.397 [2024-07-15 09:40:04.353044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.397 [2024-07-15 09:40:04.353054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.397 qpair failed and we were unable to recover it. 00:31:17.397 [2024-07-15 09:40:04.362884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.397 [2024-07-15 09:40:04.362931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.397 [2024-07-15 09:40:04.362942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.397 [2024-07-15 09:40:04.362947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.397 [2024-07-15 09:40:04.362951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.397 [2024-07-15 09:40:04.362961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.397 qpair failed and we were unable to recover it. 
00:31:17.397 [2024-07-15 09:40:04.372894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.397 [2024-07-15 09:40:04.372939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.397 [2024-07-15 09:40:04.372949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.397 [2024-07-15 09:40:04.372954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.397 [2024-07-15 09:40:04.372959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.398 [2024-07-15 09:40:04.372969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.398 qpair failed and we were unable to recover it. 00:31:17.398 [2024-07-15 09:40:04.382950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.398 [2024-07-15 09:40:04.382996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.398 [2024-07-15 09:40:04.383007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.398 [2024-07-15 09:40:04.383012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.398 [2024-07-15 09:40:04.383019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.398 [2024-07-15 09:40:04.383029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.398 qpair failed and we were unable to recover it. 00:31:17.398 [2024-07-15 09:40:04.393055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.398 [2024-07-15 09:40:04.393101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.398 [2024-07-15 09:40:04.393113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.398 [2024-07-15 09:40:04.393118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.398 [2024-07-15 09:40:04.393122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.398 [2024-07-15 09:40:04.393132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.398 qpair failed and we were unable to recover it. 
00:31:17.398 [2024-07-15 09:40:04.403080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.398 [2024-07-15 09:40:04.403123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.398 [2024-07-15 09:40:04.403134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.398 [2024-07-15 09:40:04.403140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.398 [2024-07-15 09:40:04.403144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.398 [2024-07-15 09:40:04.403155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.398 qpair failed and we were unable to recover it. 00:31:17.398 [2024-07-15 09:40:04.413119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.398 [2024-07-15 09:40:04.413169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.398 [2024-07-15 09:40:04.413180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.398 [2024-07-15 09:40:04.413185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.398 [2024-07-15 09:40:04.413189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.398 [2024-07-15 09:40:04.413200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.398 qpair failed and we were unable to recover it. 00:31:17.398 [2024-07-15 09:40:04.422971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.398 [2024-07-15 09:40:04.423015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.398 [2024-07-15 09:40:04.423026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.398 [2024-07-15 09:40:04.423031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.398 [2024-07-15 09:40:04.423036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.398 [2024-07-15 09:40:04.423046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.398 qpair failed and we were unable to recover it. 
00:31:17.398 [2024-07-15 09:40:04.433166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.398 [2024-07-15 09:40:04.433224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.398 [2024-07-15 09:40:04.433236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.398 [2024-07-15 09:40:04.433241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.398 [2024-07-15 09:40:04.433245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.398 [2024-07-15 09:40:04.433256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.398 qpair failed and we were unable to recover it. 00:31:17.398 [2024-07-15 09:40:04.443202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.398 [2024-07-15 09:40:04.443251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.398 [2024-07-15 09:40:04.443261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.398 [2024-07-15 09:40:04.443266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.398 [2024-07-15 09:40:04.443271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.398 [2024-07-15 09:40:04.443281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.398 qpair failed and we were unable to recover it. 00:31:17.398 [2024-07-15 09:40:04.453098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.398 [2024-07-15 09:40:04.453157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.398 [2024-07-15 09:40:04.453167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.398 [2024-07-15 09:40:04.453172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.398 [2024-07-15 09:40:04.453177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.398 [2024-07-15 09:40:04.453187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.398 qpair failed and we were unable to recover it. 
00:31:17.398 [2024-07-15 09:40:04.463207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.398 [2024-07-15 09:40:04.463252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.398 [2024-07-15 09:40:04.463263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.398 [2024-07-15 09:40:04.463268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.398 [2024-07-15 09:40:04.463272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.398 [2024-07-15 09:40:04.463282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.398 qpair failed and we were unable to recover it. 00:31:17.398 [2024-07-15 09:40:04.473141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.398 [2024-07-15 09:40:04.473195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.398 [2024-07-15 09:40:04.473206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.398 [2024-07-15 09:40:04.473214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.398 [2024-07-15 09:40:04.473219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.398 [2024-07-15 09:40:04.473229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.398 qpair failed and we were unable to recover it. 00:31:17.398 [2024-07-15 09:40:04.483291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.398 [2024-07-15 09:40:04.483339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.398 [2024-07-15 09:40:04.483349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.398 [2024-07-15 09:40:04.483354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.398 [2024-07-15 09:40:04.483359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.398 [2024-07-15 09:40:04.483369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.398 qpair failed and we were unable to recover it. 
00:31:17.398 [2024-07-15 09:40:04.493303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.398 [2024-07-15 09:40:04.493354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.398 [2024-07-15 09:40:04.493365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.398 [2024-07-15 09:40:04.493370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.398 [2024-07-15 09:40:04.493375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.398 [2024-07-15 09:40:04.493385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.398 qpair failed and we were unable to recover it. 00:31:17.398 [2024-07-15 09:40:04.503186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.398 [2024-07-15 09:40:04.503227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.398 [2024-07-15 09:40:04.503238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.398 [2024-07-15 09:40:04.503244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.398 [2024-07-15 09:40:04.503248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.398 [2024-07-15 09:40:04.503258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.398 qpair failed and we were unable to recover it. 00:31:17.398 [2024-07-15 09:40:04.513369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.398 [2024-07-15 09:40:04.513418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.398 [2024-07-15 09:40:04.513429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.398 [2024-07-15 09:40:04.513434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.398 [2024-07-15 09:40:04.513439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.398 [2024-07-15 09:40:04.513449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.398 qpair failed and we were unable to recover it. 
00:31:17.398 [2024-07-15 09:40:04.523404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.399 [2024-07-15 09:40:04.523455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.399 [2024-07-15 09:40:04.523466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.399 [2024-07-15 09:40:04.523471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.399 [2024-07-15 09:40:04.523475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.399 [2024-07-15 09:40:04.523486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.399 qpair failed and we were unable to recover it. 00:31:17.399 [2024-07-15 09:40:04.533431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.399 [2024-07-15 09:40:04.533482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.399 [2024-07-15 09:40:04.533493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.399 [2024-07-15 09:40:04.533498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.399 [2024-07-15 09:40:04.533503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.399 [2024-07-15 09:40:04.533513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.399 qpair failed and we were unable to recover it. 00:31:17.399 [2024-07-15 09:40:04.543431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.399 [2024-07-15 09:40:04.543476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.399 [2024-07-15 09:40:04.543487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.399 [2024-07-15 09:40:04.543493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.399 [2024-07-15 09:40:04.543497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.399 [2024-07-15 09:40:04.543507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.399 qpair failed and we were unable to recover it. 
00:31:17.399 [2024-07-15 09:40:04.553496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.399 [2024-07-15 09:40:04.553546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.399 [2024-07-15 09:40:04.553556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.399 [2024-07-15 09:40:04.553562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.399 [2024-07-15 09:40:04.553566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.399 [2024-07-15 09:40:04.553577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.399 qpair failed and we were unable to recover it. 00:31:17.399 [2024-07-15 09:40:04.563489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.399 [2024-07-15 09:40:04.563547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.399 [2024-07-15 09:40:04.563561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.399 [2024-07-15 09:40:04.563566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.399 [2024-07-15 09:40:04.563570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.399 [2024-07-15 09:40:04.563580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.399 qpair failed and we were unable to recover it. 00:31:17.399 [2024-07-15 09:40:04.573543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.399 [2024-07-15 09:40:04.573593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.399 [2024-07-15 09:40:04.573604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.399 [2024-07-15 09:40:04.573609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.399 [2024-07-15 09:40:04.573614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.399 [2024-07-15 09:40:04.573624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.399 qpair failed and we were unable to recover it. 
00:31:17.399 [2024-07-15 09:40:04.583400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.399 [2024-07-15 09:40:04.583443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.399 [2024-07-15 09:40:04.583454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.399 [2024-07-15 09:40:04.583459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.399 [2024-07-15 09:40:04.583464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.399 [2024-07-15 09:40:04.583475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.399 qpair failed and we were unable to recover it. 00:31:17.399 [2024-07-15 09:40:04.593595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.399 [2024-07-15 09:40:04.593655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.399 [2024-07-15 09:40:04.593666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.399 [2024-07-15 09:40:04.593671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.399 [2024-07-15 09:40:04.593676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.399 [2024-07-15 09:40:04.593686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.399 qpair failed and we were unable to recover it. 00:31:17.661 [2024-07-15 09:40:04.603635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.661 [2024-07-15 09:40:04.603685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.661 [2024-07-15 09:40:04.603696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.661 [2024-07-15 09:40:04.603701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.661 [2024-07-15 09:40:04.603706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.661 [2024-07-15 09:40:04.603719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.661 qpair failed and we were unable to recover it. 
00:31:17.661 [2024-07-15 09:40:04.613646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.661 [2024-07-15 09:40:04.613693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.661 [2024-07-15 09:40:04.613705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.661 [2024-07-15 09:40:04.613711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.661 [2024-07-15 09:40:04.613715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.661 [2024-07-15 09:40:04.613726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.661 qpair failed and we were unable to recover it. 00:31:17.661 [2024-07-15 09:40:04.623630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.661 [2024-07-15 09:40:04.623673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.661 [2024-07-15 09:40:04.623684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.662 [2024-07-15 09:40:04.623689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.662 [2024-07-15 09:40:04.623694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.662 [2024-07-15 09:40:04.623704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.662 qpair failed and we were unable to recover it. 00:31:17.662 [2024-07-15 09:40:04.633694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.662 [2024-07-15 09:40:04.633747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.662 [2024-07-15 09:40:04.633763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.662 [2024-07-15 09:40:04.633768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.662 [2024-07-15 09:40:04.633773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.662 [2024-07-15 09:40:04.633783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.662 qpair failed and we were unable to recover it. 
00:31:17.662 [2024-07-15 09:40:04.643722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.662 [2024-07-15 09:40:04.643775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.662 [2024-07-15 09:40:04.643786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.662 [2024-07-15 09:40:04.643791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.662 [2024-07-15 09:40:04.643796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.662 [2024-07-15 09:40:04.643806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.662 qpair failed and we were unable to recover it. 00:31:17.662 [2024-07-15 09:40:04.653741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.662 [2024-07-15 09:40:04.653789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.662 [2024-07-15 09:40:04.653802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.662 [2024-07-15 09:40:04.653807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.662 [2024-07-15 09:40:04.653812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.662 [2024-07-15 09:40:04.653822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.662 qpair failed and we were unable to recover it. 00:31:17.662 [2024-07-15 09:40:04.663699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.662 [2024-07-15 09:40:04.663744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.662 [2024-07-15 09:40:04.663759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.662 [2024-07-15 09:40:04.663765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.662 [2024-07-15 09:40:04.663769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.662 [2024-07-15 09:40:04.663779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.662 qpair failed and we were unable to recover it. 
00:31:17.662 [2024-07-15 09:40:04.673801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.662 [2024-07-15 09:40:04.673853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.662 [2024-07-15 09:40:04.673864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.662 [2024-07-15 09:40:04.673868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.662 [2024-07-15 09:40:04.673873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.662 [2024-07-15 09:40:04.673883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.662 qpair failed and we were unable to recover it. 00:31:17.662 [2024-07-15 09:40:04.683863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.662 [2024-07-15 09:40:04.683946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.662 [2024-07-15 09:40:04.683957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.662 [2024-07-15 09:40:04.683962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.662 [2024-07-15 09:40:04.683967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.662 [2024-07-15 09:40:04.683978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.662 qpair failed and we were unable to recover it. 00:31:17.662 [2024-07-15 09:40:04.693844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.662 [2024-07-15 09:40:04.693897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.662 [2024-07-15 09:40:04.693908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.662 [2024-07-15 09:40:04.693913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.662 [2024-07-15 09:40:04.693917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.662 [2024-07-15 09:40:04.693930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.662 qpair failed and we were unable to recover it. 
00:31:17.662 [2024-07-15 09:40:04.703716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.662 [2024-07-15 09:40:04.703759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.662 [2024-07-15 09:40:04.703772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.662 [2024-07-15 09:40:04.703777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.662 [2024-07-15 09:40:04.703781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.662 [2024-07-15 09:40:04.703792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.662 qpair failed and we were unable to recover it. 00:31:17.662 [2024-07-15 09:40:04.713788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.662 [2024-07-15 09:40:04.713853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.662 [2024-07-15 09:40:04.713864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.662 [2024-07-15 09:40:04.713869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.662 [2024-07-15 09:40:04.713873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.662 [2024-07-15 09:40:04.713883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.662 qpair failed and we were unable to recover it. 00:31:17.662 [2024-07-15 09:40:04.723966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.662 [2024-07-15 09:40:04.724014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.662 [2024-07-15 09:40:04.724025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.662 [2024-07-15 09:40:04.724031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.662 [2024-07-15 09:40:04.724035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.662 [2024-07-15 09:40:04.724045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.662 qpair failed and we were unable to recover it. 
00:31:17.662 [2024-07-15 09:40:04.733851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.662 [2024-07-15 09:40:04.733896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.662 [2024-07-15 09:40:04.733906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.662 [2024-07-15 09:40:04.733911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.662 [2024-07-15 09:40:04.733916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.662 [2024-07-15 09:40:04.733926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.662 qpair failed and we were unable to recover it. 00:31:17.662 [2024-07-15 09:40:04.743963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.662 [2024-07-15 09:40:04.744004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.662 [2024-07-15 09:40:04.744017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.662 [2024-07-15 09:40:04.744023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.662 [2024-07-15 09:40:04.744027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.662 [2024-07-15 09:40:04.744038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.662 qpair failed and we were unable to recover it. 00:31:17.662 [2024-07-15 09:40:04.754005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.662 [2024-07-15 09:40:04.754090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.662 [2024-07-15 09:40:04.754101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.662 [2024-07-15 09:40:04.754106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.662 [2024-07-15 09:40:04.754111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.662 [2024-07-15 09:40:04.754121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.662 qpair failed and we were unable to recover it. 
00:31:17.662 [2024-07-15 09:40:04.764057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.662 [2024-07-15 09:40:04.764110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.662 [2024-07-15 09:40:04.764121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.662 [2024-07-15 09:40:04.764126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.662 [2024-07-15 09:40:04.764131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.662 [2024-07-15 09:40:04.764141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.662 qpair failed and we were unable to recover it. 00:31:17.663 [2024-07-15 09:40:04.774071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.663 [2024-07-15 09:40:04.774121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.663 [2024-07-15 09:40:04.774131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.663 [2024-07-15 09:40:04.774136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.663 [2024-07-15 09:40:04.774141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.663 [2024-07-15 09:40:04.774151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.663 qpair failed and we were unable to recover it. 00:31:17.663 [2024-07-15 09:40:04.784087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.663 [2024-07-15 09:40:04.784133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.663 [2024-07-15 09:40:04.784144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.663 [2024-07-15 09:40:04.784149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.663 [2024-07-15 09:40:04.784156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.663 [2024-07-15 09:40:04.784166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.663 qpair failed and we were unable to recover it. 
00:31:17.663 [2024-07-15 09:40:04.794101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.663 [2024-07-15 09:40:04.794166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.663 [2024-07-15 09:40:04.794177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.663 [2024-07-15 09:40:04.794182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.663 [2024-07-15 09:40:04.794187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.663 [2024-07-15 09:40:04.794197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.663 qpair failed and we were unable to recover it. 00:31:17.663 [2024-07-15 09:40:04.804041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.663 [2024-07-15 09:40:04.804113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.663 [2024-07-15 09:40:04.804124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.663 [2024-07-15 09:40:04.804129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.663 [2024-07-15 09:40:04.804134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.663 [2024-07-15 09:40:04.804144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.663 qpair failed and we were unable to recover it. 00:31:17.663 [2024-07-15 09:40:04.814177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.663 [2024-07-15 09:40:04.814226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.663 [2024-07-15 09:40:04.814237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.663 [2024-07-15 09:40:04.814242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.663 [2024-07-15 09:40:04.814246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.663 [2024-07-15 09:40:04.814256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.663 qpair failed and we were unable to recover it. 
00:31:17.663 [2024-07-15 09:40:04.824145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.663 [2024-07-15 09:40:04.824187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.663 [2024-07-15 09:40:04.824197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.663 [2024-07-15 09:40:04.824202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.663 [2024-07-15 09:40:04.824207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.663 [2024-07-15 09:40:04.824217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.663 qpair failed and we were unable to recover it. 00:31:17.663 [2024-07-15 09:40:04.834213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.663 [2024-07-15 09:40:04.834275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.663 [2024-07-15 09:40:04.834285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.663 [2024-07-15 09:40:04.834290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.663 [2024-07-15 09:40:04.834295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.663 [2024-07-15 09:40:04.834305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.663 qpair failed and we were unable to recover it. 00:31:17.663 [2024-07-15 09:40:04.844281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.663 [2024-07-15 09:40:04.844329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.663 [2024-07-15 09:40:04.844340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.663 [2024-07-15 09:40:04.844345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.663 [2024-07-15 09:40:04.844350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.663 [2024-07-15 09:40:04.844360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.663 qpair failed and we were unable to recover it. 
00:31:17.663 [2024-07-15 09:40:04.854316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.663 [2024-07-15 09:40:04.854363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.663 [2024-07-15 09:40:04.854374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.663 [2024-07-15 09:40:04.854379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.663 [2024-07-15 09:40:04.854383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.663 [2024-07-15 09:40:04.854393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.663 qpair failed and we were unable to recover it. 00:31:17.925 [2024-07-15 09:40:04.864303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.926 [2024-07-15 09:40:04.864357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.926 [2024-07-15 09:40:04.864368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.926 [2024-07-15 09:40:04.864373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.926 [2024-07-15 09:40:04.864378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.926 [2024-07-15 09:40:04.864388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.926 qpair failed and we were unable to recover it. 00:31:17.926 [2024-07-15 09:40:04.874331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.926 [2024-07-15 09:40:04.874375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.926 [2024-07-15 09:40:04.874385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.926 [2024-07-15 09:40:04.874393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.926 [2024-07-15 09:40:04.874398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.926 [2024-07-15 09:40:04.874408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.926 qpair failed and we were unable to recover it. 
00:31:17.926 [2024-07-15 09:40:04.884297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.926 [2024-07-15 09:40:04.884345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.926 [2024-07-15 09:40:04.884356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.926 [2024-07-15 09:40:04.884361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.926 [2024-07-15 09:40:04.884366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.926 [2024-07-15 09:40:04.884376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.926 qpair failed and we were unable to recover it. 00:31:17.926 [2024-07-15 09:40:04.894421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.926 [2024-07-15 09:40:04.894468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.926 [2024-07-15 09:40:04.894479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.926 [2024-07-15 09:40:04.894485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.926 [2024-07-15 09:40:04.894489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.926 [2024-07-15 09:40:04.894499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.926 qpair failed and we were unable to recover it. 00:31:17.926 [2024-07-15 09:40:04.904366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.926 [2024-07-15 09:40:04.904408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.926 [2024-07-15 09:40:04.904421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.926 [2024-07-15 09:40:04.904426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.926 [2024-07-15 09:40:04.904431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.926 [2024-07-15 09:40:04.904442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.926 qpair failed and we were unable to recover it. 
00:31:17.926 [2024-07-15 09:40:04.914402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.926 [2024-07-15 09:40:04.914447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.926 [2024-07-15 09:40:04.914459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.926 [2024-07-15 09:40:04.914464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.926 [2024-07-15 09:40:04.914469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.926 [2024-07-15 09:40:04.914480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.926 qpair failed and we were unable to recover it. 00:31:17.926 [2024-07-15 09:40:04.924480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.926 [2024-07-15 09:40:04.924524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.926 [2024-07-15 09:40:04.924535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.926 [2024-07-15 09:40:04.924540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.926 [2024-07-15 09:40:04.924545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.926 [2024-07-15 09:40:04.924555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.926 qpair failed and we were unable to recover it. 00:31:17.926 [2024-07-15 09:40:04.934541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.926 [2024-07-15 09:40:04.934636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.926 [2024-07-15 09:40:04.934647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.926 [2024-07-15 09:40:04.934652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.926 [2024-07-15 09:40:04.934657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.926 [2024-07-15 09:40:04.934667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.926 qpair failed and we were unable to recover it. 
00:31:17.926 [2024-07-15 09:40:04.944538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.926 [2024-07-15 09:40:04.944582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.926 [2024-07-15 09:40:04.944593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.926 [2024-07-15 09:40:04.944598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.926 [2024-07-15 09:40:04.944602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.926 [2024-07-15 09:40:04.944613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.926 qpair failed and we were unable to recover it. 00:31:17.926 [2024-07-15 09:40:04.954601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.926 [2024-07-15 09:40:04.954683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.926 [2024-07-15 09:40:04.954693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.926 [2024-07-15 09:40:04.954698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.926 [2024-07-15 09:40:04.954703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.926 [2024-07-15 09:40:04.954713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.926 qpair failed and we were unable to recover it. 00:31:17.926 [2024-07-15 09:40:04.964634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.926 [2024-07-15 09:40:04.964684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.926 [2024-07-15 09:40:04.964695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.926 [2024-07-15 09:40:04.964702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.926 [2024-07-15 09:40:04.964707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.926 [2024-07-15 09:40:04.964717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.926 qpair failed and we were unable to recover it. 
00:31:17.926 [2024-07-15 09:40:04.974511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.926 [2024-07-15 09:40:04.974556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.926 [2024-07-15 09:40:04.974567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.926 [2024-07-15 09:40:04.974572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.926 [2024-07-15 09:40:04.974576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.926 [2024-07-15 09:40:04.974587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.926 qpair failed and we were unable to recover it. 00:31:17.926 [2024-07-15 09:40:04.984552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.926 [2024-07-15 09:40:04.984626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.926 [2024-07-15 09:40:04.984637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.926 [2024-07-15 09:40:04.984642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.926 [2024-07-15 09:40:04.984646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.926 [2024-07-15 09:40:04.984656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.926 qpair failed and we were unable to recover it. 00:31:17.926 [2024-07-15 09:40:04.994681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.926 [2024-07-15 09:40:04.994733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.926 [2024-07-15 09:40:04.994744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.926 [2024-07-15 09:40:04.994749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.926 [2024-07-15 09:40:04.994759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.926 [2024-07-15 09:40:04.994769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.926 qpair failed and we were unable to recover it. 
00:31:17.926 [2024-07-15 09:40:05.004679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.926 [2024-07-15 09:40:05.004722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.926 [2024-07-15 09:40:05.004733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.927 [2024-07-15 09:40:05.004738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.927 [2024-07-15 09:40:05.004742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.927 [2024-07-15 09:40:05.004756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.927 qpair failed and we were unable to recover it. 00:31:17.927 [2024-07-15 09:40:05.014739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.927 [2024-07-15 09:40:05.014784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.927 [2024-07-15 09:40:05.014795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.927 [2024-07-15 09:40:05.014800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.927 [2024-07-15 09:40:05.014804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.927 [2024-07-15 09:40:05.014815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.927 qpair failed and we were unable to recover it. 00:31:17.927 [2024-07-15 09:40:05.024737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.927 [2024-07-15 09:40:05.024787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.927 [2024-07-15 09:40:05.024798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.927 [2024-07-15 09:40:05.024803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.927 [2024-07-15 09:40:05.024807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.927 [2024-07-15 09:40:05.024817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.927 qpair failed and we were unable to recover it. 
00:31:17.927 [2024-07-15 09:40:05.034755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.927 [2024-07-15 09:40:05.034804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.927 [2024-07-15 09:40:05.034815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.927 [2024-07-15 09:40:05.034820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.927 [2024-07-15 09:40:05.034824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.927 [2024-07-15 09:40:05.034835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.927 qpair failed and we were unable to recover it. 00:31:17.927 [2024-07-15 09:40:05.044753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.927 [2024-07-15 09:40:05.044842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.927 [2024-07-15 09:40:05.044853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.927 [2024-07-15 09:40:05.044858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.927 [2024-07-15 09:40:05.044863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.927 [2024-07-15 09:40:05.044873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.927 qpair failed and we were unable to recover it. 00:31:17.927 [2024-07-15 09:40:05.054821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.927 [2024-07-15 09:40:05.054868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.927 [2024-07-15 09:40:05.054882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.927 [2024-07-15 09:40:05.054887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.927 [2024-07-15 09:40:05.054892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.927 [2024-07-15 09:40:05.054902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.927 qpair failed and we were unable to recover it. 
00:31:17.927 [2024-07-15 09:40:05.064844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.927 [2024-07-15 09:40:05.064886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.927 [2024-07-15 09:40:05.064896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.927 [2024-07-15 09:40:05.064902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.927 [2024-07-15 09:40:05.064906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.927 [2024-07-15 09:40:05.064916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.927 qpair failed and we were unable to recover it. 00:31:17.927 [2024-07-15 09:40:05.074863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.927 [2024-07-15 09:40:05.074911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.927 [2024-07-15 09:40:05.074922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.927 [2024-07-15 09:40:05.074927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.927 [2024-07-15 09:40:05.074932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.927 [2024-07-15 09:40:05.074942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.927 qpair failed and we were unable to recover it. 00:31:17.927 [2024-07-15 09:40:05.084889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.927 [2024-07-15 09:40:05.084934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.927 [2024-07-15 09:40:05.084945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.927 [2024-07-15 09:40:05.084950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.927 [2024-07-15 09:40:05.084955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.927 [2024-07-15 09:40:05.084965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.927 qpair failed and we were unable to recover it. 
00:31:17.927 [2024-07-15 09:40:05.094958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.927 [2024-07-15 09:40:05.095008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.927 [2024-07-15 09:40:05.095019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.927 [2024-07-15 09:40:05.095024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.927 [2024-07-15 09:40:05.095029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.927 [2024-07-15 09:40:05.095042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.927 qpair failed and we were unable to recover it. 00:31:17.927 [2024-07-15 09:40:05.104946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.927 [2024-07-15 09:40:05.104989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.927 [2024-07-15 09:40:05.105001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.927 [2024-07-15 09:40:05.105006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.927 [2024-07-15 09:40:05.105010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.927 [2024-07-15 09:40:05.105020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.927 qpair failed and we were unable to recover it. 00:31:17.927 [2024-07-15 09:40:05.114942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.927 [2024-07-15 09:40:05.114989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.927 [2024-07-15 09:40:05.114999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.927 [2024-07-15 09:40:05.115004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.927 [2024-07-15 09:40:05.115009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:17.927 [2024-07-15 09:40:05.115019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.927 qpair failed and we were unable to recover it. 
00:31:18.190 [2024-07-15 09:40:05.124964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.190 [2024-07-15 09:40:05.125004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.190 [2024-07-15 09:40:05.125015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.190 [2024-07-15 09:40:05.125020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.190 [2024-07-15 09:40:05.125025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.190 [2024-07-15 09:40:05.125035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.190 qpair failed and we were unable to recover it. 00:31:18.190 [2024-07-15 09:40:05.135046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.190 [2024-07-15 09:40:05.135095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.190 [2024-07-15 09:40:05.135106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.190 [2024-07-15 09:40:05.135111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.190 [2024-07-15 09:40:05.135116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.190 [2024-07-15 09:40:05.135126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.190 qpair failed and we were unable to recover it. 00:31:18.190 [2024-07-15 09:40:05.145039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.190 [2024-07-15 09:40:05.145081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.190 [2024-07-15 09:40:05.145094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.190 [2024-07-15 09:40:05.145099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.190 [2024-07-15 09:40:05.145104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.190 [2024-07-15 09:40:05.145113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.190 qpair failed and we were unable to recover it. 
00:31:18.190 [2024-07-15 09:40:05.155030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.190 [2024-07-15 09:40:05.155075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.190 [2024-07-15 09:40:05.155086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.190 [2024-07-15 09:40:05.155091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.190 [2024-07-15 09:40:05.155095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.190 [2024-07-15 09:40:05.155106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.190 qpair failed and we were unable to recover it. 00:31:18.190 [2024-07-15 09:40:05.165095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.190 [2024-07-15 09:40:05.165138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.190 [2024-07-15 09:40:05.165148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.190 [2024-07-15 09:40:05.165153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.190 [2024-07-15 09:40:05.165158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.190 [2024-07-15 09:40:05.165168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.190 qpair failed and we were unable to recover it. 00:31:18.190 [2024-07-15 09:40:05.175033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.190 [2024-07-15 09:40:05.175079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.190 [2024-07-15 09:40:05.175090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.190 [2024-07-15 09:40:05.175094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.190 [2024-07-15 09:40:05.175099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.190 [2024-07-15 09:40:05.175109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.190 qpair failed and we were unable to recover it. 
00:31:18.190 [2024-07-15 09:40:05.185127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.190 [2024-07-15 09:40:05.185200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.190 [2024-07-15 09:40:05.185211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.190 [2024-07-15 09:40:05.185216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.190 [2024-07-15 09:40:05.185223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.190 [2024-07-15 09:40:05.185233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.190 qpair failed and we were unable to recover it. 00:31:18.190 [2024-07-15 09:40:05.195154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.190 [2024-07-15 09:40:05.195203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.190 [2024-07-15 09:40:05.195213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.190 [2024-07-15 09:40:05.195218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.190 [2024-07-15 09:40:05.195223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.190 [2024-07-15 09:40:05.195233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.190 qpair failed and we were unable to recover it. 00:31:18.190 [2024-07-15 09:40:05.205201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.190 [2024-07-15 09:40:05.205244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.190 [2024-07-15 09:40:05.205255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.190 [2024-07-15 09:40:05.205260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.190 [2024-07-15 09:40:05.205265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.190 [2024-07-15 09:40:05.205275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.190 qpair failed and we were unable to recover it. 
00:31:18.190 [2024-07-15 09:40:05.215262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.190 [2024-07-15 09:40:05.215304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.190 [2024-07-15 09:40:05.215315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.190 [2024-07-15 09:40:05.215320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.190 [2024-07-15 09:40:05.215324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.190 [2024-07-15 09:40:05.215335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.190 qpair failed and we were unable to recover it. 00:31:18.190 [2024-07-15 09:40:05.225259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.190 [2024-07-15 09:40:05.225342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.190 [2024-07-15 09:40:05.225352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.190 [2024-07-15 09:40:05.225358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.190 [2024-07-15 09:40:05.225363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.190 [2024-07-15 09:40:05.225373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.190 qpair failed and we were unable to recover it. 00:31:18.190 [2024-07-15 09:40:05.235255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.190 [2024-07-15 09:40:05.235303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.190 [2024-07-15 09:40:05.235314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.190 [2024-07-15 09:40:05.235319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.190 [2024-07-15 09:40:05.235323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.190 [2024-07-15 09:40:05.235334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.190 qpair failed and we were unable to recover it. 
00:31:18.190 [2024-07-15 09:40:05.245375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.190 [2024-07-15 09:40:05.245418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.190 [2024-07-15 09:40:05.245429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.191 [2024-07-15 09:40:05.245434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.191 [2024-07-15 09:40:05.245438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.191 [2024-07-15 09:40:05.245449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.191 qpair failed and we were unable to recover it. 00:31:18.191 [2024-07-15 09:40:05.255375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.191 [2024-07-15 09:40:05.255454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.191 [2024-07-15 09:40:05.255472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.191 [2024-07-15 09:40:05.255480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.191 [2024-07-15 09:40:05.255485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.191 [2024-07-15 09:40:05.255499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.191 qpair failed and we were unable to recover it. 00:31:18.191 [2024-07-15 09:40:05.265271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.191 [2024-07-15 09:40:05.265316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.191 [2024-07-15 09:40:05.265328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.191 [2024-07-15 09:40:05.265333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.191 [2024-07-15 09:40:05.265338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.191 [2024-07-15 09:40:05.265349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.191 qpair failed and we were unable to recover it. 
00:31:18.191 [2024-07-15 09:40:05.275251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.191 [2024-07-15 09:40:05.275295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.191 [2024-07-15 09:40:05.275306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.191 [2024-07-15 09:40:05.275314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.191 [2024-07-15 09:40:05.275319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.191 [2024-07-15 09:40:05.275330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.191 qpair failed and we were unable to recover it. 00:31:18.191 [2024-07-15 09:40:05.285408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.191 [2024-07-15 09:40:05.285451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.191 [2024-07-15 09:40:05.285462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.191 [2024-07-15 09:40:05.285468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.191 [2024-07-15 09:40:05.285472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.191 [2024-07-15 09:40:05.285483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.191 qpair failed and we were unable to recover it. 00:31:18.191 [2024-07-15 09:40:05.295477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.191 [2024-07-15 09:40:05.295526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.191 [2024-07-15 09:40:05.295536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.191 [2024-07-15 09:40:05.295541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.191 [2024-07-15 09:40:05.295546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.191 [2024-07-15 09:40:05.295556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.191 qpair failed and we were unable to recover it. 
00:31:18.191 [2024-07-15 09:40:05.305463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.191 [2024-07-15 09:40:05.305511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.191 [2024-07-15 09:40:05.305523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.191 [2024-07-15 09:40:05.305528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.191 [2024-07-15 09:40:05.305532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.191 [2024-07-15 09:40:05.305542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.191 qpair failed and we were unable to recover it. 00:31:18.191 [2024-07-15 09:40:05.315509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.191 [2024-07-15 09:40:05.315556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.191 [2024-07-15 09:40:05.315574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.191 [2024-07-15 09:40:05.315580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.191 [2024-07-15 09:40:05.315585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.191 [2024-07-15 09:40:05.315599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.191 qpair failed and we were unable to recover it. 00:31:18.191 [2024-07-15 09:40:05.325407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.191 [2024-07-15 09:40:05.325468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.191 [2024-07-15 09:40:05.325481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.191 [2024-07-15 09:40:05.325486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.191 [2024-07-15 09:40:05.325490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.191 [2024-07-15 09:40:05.325502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.191 qpair failed and we were unable to recover it. 
00:31:18.191 [2024-07-15 09:40:05.335634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.191 [2024-07-15 09:40:05.335681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.191 [2024-07-15 09:40:05.335693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.191 [2024-07-15 09:40:05.335698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.191 [2024-07-15 09:40:05.335703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.191 [2024-07-15 09:40:05.335714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.191 qpair failed and we were unable to recover it. 00:31:18.191 [2024-07-15 09:40:05.345595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.191 [2024-07-15 09:40:05.345642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.191 [2024-07-15 09:40:05.345654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.191 [2024-07-15 09:40:05.345659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.191 [2024-07-15 09:40:05.345664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.191 [2024-07-15 09:40:05.345674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.191 qpair failed and we were unable to recover it. 00:31:18.191 [2024-07-15 09:40:05.355590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.191 [2024-07-15 09:40:05.355637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.191 [2024-07-15 09:40:05.355648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.191 [2024-07-15 09:40:05.355653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.191 [2024-07-15 09:40:05.355658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.192 [2024-07-15 09:40:05.355668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.192 qpair failed and we were unable to recover it. 
00:31:18.192 [2024-07-15 09:40:05.365632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.192 [2024-07-15 09:40:05.365669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.192 [2024-07-15 09:40:05.365680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.192 [2024-07-15 09:40:05.365691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.192 [2024-07-15 09:40:05.365695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.192 [2024-07-15 09:40:05.365706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.192 qpair failed and we were unable to recover it. 00:31:18.192 [2024-07-15 09:40:05.375703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.192 [2024-07-15 09:40:05.375798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.192 [2024-07-15 09:40:05.375810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.192 [2024-07-15 09:40:05.375816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.192 [2024-07-15 09:40:05.375820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.192 [2024-07-15 09:40:05.375831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.192 qpair failed and we were unable to recover it. 00:31:18.192 [2024-07-15 09:40:05.385715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.192 [2024-07-15 09:40:05.385761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.192 [2024-07-15 09:40:05.385772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.192 [2024-07-15 09:40:05.385778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.192 [2024-07-15 09:40:05.385782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.192 [2024-07-15 09:40:05.385793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.192 qpair failed and we were unable to recover it. 
00:31:18.454 [2024-07-15 09:40:05.395728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.454 [2024-07-15 09:40:05.395776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.454 [2024-07-15 09:40:05.395787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.454 [2024-07-15 09:40:05.395792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.454 [2024-07-15 09:40:05.395796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.454 [2024-07-15 09:40:05.395807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.454 qpair failed and we were unable to recover it. 00:31:18.454 [2024-07-15 09:40:05.405738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.454 [2024-07-15 09:40:05.405833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.454 [2024-07-15 09:40:05.405845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.454 [2024-07-15 09:40:05.405850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.454 [2024-07-15 09:40:05.405855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.454 [2024-07-15 09:40:05.405866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.454 qpair failed and we were unable to recover it. 00:31:18.454 [2024-07-15 09:40:05.415799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.454 [2024-07-15 09:40:05.415848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.454 [2024-07-15 09:40:05.415859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.454 [2024-07-15 09:40:05.415864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.455 [2024-07-15 09:40:05.415869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.455 [2024-07-15 09:40:05.415879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.455 qpair failed and we were unable to recover it. 
00:31:18.455 [2024-07-15 09:40:05.425672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.455 [2024-07-15 09:40:05.425718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.455 [2024-07-15 09:40:05.425729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.455 [2024-07-15 09:40:05.425734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.455 [2024-07-15 09:40:05.425739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.455 [2024-07-15 09:40:05.425749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.455 qpair failed and we were unable to recover it. 00:31:18.455 [2024-07-15 09:40:05.435827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.455 [2024-07-15 09:40:05.435873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.455 [2024-07-15 09:40:05.435883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.455 [2024-07-15 09:40:05.435889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.455 [2024-07-15 09:40:05.435893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.455 [2024-07-15 09:40:05.435904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.455 qpair failed and we were unable to recover it. 00:31:18.455 [2024-07-15 09:40:05.445870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.455 [2024-07-15 09:40:05.445958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.455 [2024-07-15 09:40:05.445969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.455 [2024-07-15 09:40:05.445974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.455 [2024-07-15 09:40:05.445978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.455 [2024-07-15 09:40:05.445989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.455 qpair failed and we were unable to recover it. 
00:31:18.455 [2024-07-15 09:40:05.455907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.455 [2024-07-15 09:40:05.455958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.455 [2024-07-15 09:40:05.455972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.455 [2024-07-15 09:40:05.455978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.455 [2024-07-15 09:40:05.455982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.455 [2024-07-15 09:40:05.455993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.455 qpair failed and we were unable to recover it. 00:31:18.455 [2024-07-15 09:40:05.465871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.455 [2024-07-15 09:40:05.465914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.455 [2024-07-15 09:40:05.465925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.455 [2024-07-15 09:40:05.465931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.455 [2024-07-15 09:40:05.465935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.455 [2024-07-15 09:40:05.465945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.455 qpair failed and we were unable to recover it. 00:31:18.455 [2024-07-15 09:40:05.475929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.455 [2024-07-15 09:40:05.475978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.455 [2024-07-15 09:40:05.475989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.455 [2024-07-15 09:40:05.475994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.455 [2024-07-15 09:40:05.475999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.455 [2024-07-15 09:40:05.476009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.455 qpair failed and we were unable to recover it. 
00:31:18.455 [2024-07-15 09:40:05.485948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.455 [2024-07-15 09:40:05.485988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.455 [2024-07-15 09:40:05.485999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.455 [2024-07-15 09:40:05.486004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.455 [2024-07-15 09:40:05.486008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.455 [2024-07-15 09:40:05.486019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.455 qpair failed and we were unable to recover it. 00:31:18.455 [2024-07-15 09:40:05.496044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.455 [2024-07-15 09:40:05.496126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.455 [2024-07-15 09:40:05.496137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.455 [2024-07-15 09:40:05.496142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.455 [2024-07-15 09:40:05.496147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.455 [2024-07-15 09:40:05.496161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.455 qpair failed and we were unable to recover it. 00:31:18.455 [2024-07-15 09:40:05.505991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.455 [2024-07-15 09:40:05.506033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.455 [2024-07-15 09:40:05.506044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.455 [2024-07-15 09:40:05.506050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.455 [2024-07-15 09:40:05.506054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.455 [2024-07-15 09:40:05.506064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.455 qpair failed and we were unable to recover it. 
00:31:18.455 [2024-07-15 09:40:05.516048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.455 [2024-07-15 09:40:05.516099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.455 [2024-07-15 09:40:05.516110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.455 [2024-07-15 09:40:05.516115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.455 [2024-07-15 09:40:05.516119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.455 [2024-07-15 09:40:05.516130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.455 qpair failed and we were unable to recover it. 00:31:18.455 [2024-07-15 09:40:05.526041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.455 [2024-07-15 09:40:05.526079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.455 [2024-07-15 09:40:05.526090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.455 [2024-07-15 09:40:05.526095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.455 [2024-07-15 09:40:05.526100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.455 [2024-07-15 09:40:05.526110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.455 qpair failed and we were unable to recover it. 00:31:18.455 [2024-07-15 09:40:05.536127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.455 [2024-07-15 09:40:05.536196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.455 [2024-07-15 09:40:05.536206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.455 [2024-07-15 09:40:05.536211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.455 [2024-07-15 09:40:05.536216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.455 [2024-07-15 09:40:05.536226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.455 qpair failed and we were unable to recover it. 
00:31:18.455 [2024-07-15 09:40:05.545987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.455 [2024-07-15 09:40:05.546031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.455 [2024-07-15 09:40:05.546044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.455 [2024-07-15 09:40:05.546050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.455 [2024-07-15 09:40:05.546054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.455 [2024-07-15 09:40:05.546064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.455 qpair failed and we were unable to recover it. 00:31:18.455 [2024-07-15 09:40:05.556150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.455 [2024-07-15 09:40:05.556196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.455 [2024-07-15 09:40:05.556207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.455 [2024-07-15 09:40:05.556213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.455 [2024-07-15 09:40:05.556218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.455 [2024-07-15 09:40:05.556228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.455 qpair failed and we were unable to recover it. 00:31:18.455 [2024-07-15 09:40:05.566165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.455 [2024-07-15 09:40:05.566207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.455 [2024-07-15 09:40:05.566218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.455 [2024-07-15 09:40:05.566223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.455 [2024-07-15 09:40:05.566228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.455 [2024-07-15 09:40:05.566239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.455 qpair failed and we were unable to recover it. 
00:31:18.455 [2024-07-15 09:40:05.576215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.455 [2024-07-15 09:40:05.576259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.455 [2024-07-15 09:40:05.576269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.455 [2024-07-15 09:40:05.576274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.455 [2024-07-15 09:40:05.576279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.455 [2024-07-15 09:40:05.576289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.455 qpair failed and we were unable to recover it. 00:31:18.455 [2024-07-15 09:40:05.586241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.455 [2024-07-15 09:40:05.586281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.455 [2024-07-15 09:40:05.586292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.455 [2024-07-15 09:40:05.586297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.455 [2024-07-15 09:40:05.586304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.455 [2024-07-15 09:40:05.586315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.455 qpair failed and we were unable to recover it. 00:31:18.455 [2024-07-15 09:40:05.596257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.455 [2024-07-15 09:40:05.596310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.455 [2024-07-15 09:40:05.596321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.455 [2024-07-15 09:40:05.596326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.455 [2024-07-15 09:40:05.596330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.455 [2024-07-15 09:40:05.596341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.455 qpair failed and we were unable to recover it. 
00:31:18.455 [2024-07-15 09:40:05.606250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.455 [2024-07-15 09:40:05.606294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.455 [2024-07-15 09:40:05.606304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.455 [2024-07-15 09:40:05.606309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.455 [2024-07-15 09:40:05.606314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.455 [2024-07-15 09:40:05.606324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.455 qpair failed and we were unable to recover it. 00:31:18.455 [2024-07-15 09:40:05.616221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.455 [2024-07-15 09:40:05.616269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.455 [2024-07-15 09:40:05.616280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.455 [2024-07-15 09:40:05.616285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.455 [2024-07-15 09:40:05.616289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.455 [2024-07-15 09:40:05.616299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.455 qpair failed and we were unable to recover it. 00:31:18.455 [2024-07-15 09:40:05.626340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.455 [2024-07-15 09:40:05.626381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.455 [2024-07-15 09:40:05.626392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.455 [2024-07-15 09:40:05.626397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.455 [2024-07-15 09:40:05.626402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.455 [2024-07-15 09:40:05.626412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.455 qpair failed and we were unable to recover it. 
00:31:18.455 [2024-07-15 09:40:05.636232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.455 [2024-07-15 09:40:05.636282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.455 [2024-07-15 09:40:05.636292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.455 [2024-07-15 09:40:05.636297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.455 [2024-07-15 09:40:05.636302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.455 [2024-07-15 09:40:05.636312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.455 qpair failed and we were unable to recover it. 00:31:18.455 [2024-07-15 09:40:05.646411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.455 [2024-07-15 09:40:05.646457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.455 [2024-07-15 09:40:05.646467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.455 [2024-07-15 09:40:05.646472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.455 [2024-07-15 09:40:05.646477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.455 [2024-07-15 09:40:05.646488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.455 qpair failed and we were unable to recover it. 00:31:18.717 [2024-07-15 09:40:05.656458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.717 [2024-07-15 09:40:05.656504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.717 [2024-07-15 09:40:05.656515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.717 [2024-07-15 09:40:05.656520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.717 [2024-07-15 09:40:05.656524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.717 [2024-07-15 09:40:05.656534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.717 qpair failed and we were unable to recover it. 
00:31:18.717 [2024-07-15 09:40:05.666423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.717 [2024-07-15 09:40:05.666466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.717 [2024-07-15 09:40:05.666477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.717 [2024-07-15 09:40:05.666482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.717 [2024-07-15 09:40:05.666487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.717 [2024-07-15 09:40:05.666497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.717 qpair failed and we were unable to recover it. 00:31:18.717 [2024-07-15 09:40:05.676469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.718 [2024-07-15 09:40:05.676547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.718 [2024-07-15 09:40:05.676565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.718 [2024-07-15 09:40:05.676571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.718 [2024-07-15 09:40:05.676579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.718 [2024-07-15 09:40:05.676592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.718 qpair failed and we were unable to recover it. 00:31:18.718 [2024-07-15 09:40:05.686485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.718 [2024-07-15 09:40:05.686528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.718 [2024-07-15 09:40:05.686546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.718 [2024-07-15 09:40:05.686553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.718 [2024-07-15 09:40:05.686557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.718 [2024-07-15 09:40:05.686571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.718 qpair failed and we were unable to recover it. 
00:31:18.718 [2024-07-15 09:40:05.696481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.718 [2024-07-15 09:40:05.696532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.718 [2024-07-15 09:40:05.696550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.718 [2024-07-15 09:40:05.696556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.718 [2024-07-15 09:40:05.696561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.718 [2024-07-15 09:40:05.696575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.718 qpair failed and we were unable to recover it. 00:31:18.718 [2024-07-15 09:40:05.706550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.718 [2024-07-15 09:40:05.706596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.718 [2024-07-15 09:40:05.706607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.718 [2024-07-15 09:40:05.706612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.718 [2024-07-15 09:40:05.706617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.718 [2024-07-15 09:40:05.706628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.718 qpair failed and we were unable to recover it. 00:31:18.718 [2024-07-15 09:40:05.716450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.718 [2024-07-15 09:40:05.716499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.718 [2024-07-15 09:40:05.716510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.718 [2024-07-15 09:40:05.716515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.718 [2024-07-15 09:40:05.716520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.718 [2024-07-15 09:40:05.716531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.718 qpair failed and we were unable to recover it. 
00:31:18.718 [2024-07-15 09:40:05.726591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.718 [2024-07-15 09:40:05.726632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.718 [2024-07-15 09:40:05.726643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.718 [2024-07-15 09:40:05.726648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.718 [2024-07-15 09:40:05.726653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.718 [2024-07-15 09:40:05.726664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.718 qpair failed and we were unable to recover it. 00:31:18.718 [2024-07-15 09:40:05.736672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.718 [2024-07-15 09:40:05.736713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.718 [2024-07-15 09:40:05.736724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.718 [2024-07-15 09:40:05.736729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.718 [2024-07-15 09:40:05.736734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.718 [2024-07-15 09:40:05.736744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.718 qpair failed and we were unable to recover it. 00:31:18.718 [2024-07-15 09:40:05.746525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.718 [2024-07-15 09:40:05.746570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.718 [2024-07-15 09:40:05.746581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.718 [2024-07-15 09:40:05.746587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.718 [2024-07-15 09:40:05.746591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.718 [2024-07-15 09:40:05.746601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.718 qpair failed and we were unable to recover it. 
00:31:18.718 [2024-07-15 09:40:05.756699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.718 [2024-07-15 09:40:05.756744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.718 [2024-07-15 09:40:05.756758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.718 [2024-07-15 09:40:05.756763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.718 [2024-07-15 09:40:05.756768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.718 [2024-07-15 09:40:05.756779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.718 qpair failed and we were unable to recover it. 00:31:18.718 [2024-07-15 09:40:05.766706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.718 [2024-07-15 09:40:05.766756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.718 [2024-07-15 09:40:05.766768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.718 [2024-07-15 09:40:05.766775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.718 [2024-07-15 09:40:05.766780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.718 [2024-07-15 09:40:05.766791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.718 qpair failed and we were unable to recover it. 00:31:18.718 [2024-07-15 09:40:05.776756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.718 [2024-07-15 09:40:05.776799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.718 [2024-07-15 09:40:05.776810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.718 [2024-07-15 09:40:05.776815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.718 [2024-07-15 09:40:05.776819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.718 [2024-07-15 09:40:05.776830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.718 qpair failed and we were unable to recover it. 
00:31:18.718 [2024-07-15 09:40:05.786734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.718 [2024-07-15 09:40:05.786778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.718 [2024-07-15 09:40:05.786789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.718 [2024-07-15 09:40:05.786794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.718 [2024-07-15 09:40:05.786799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.718 [2024-07-15 09:40:05.786809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.718 qpair failed and we were unable to recover it. 00:31:18.718 [2024-07-15 09:40:05.796798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.718 [2024-07-15 09:40:05.796843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.718 [2024-07-15 09:40:05.796855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.718 [2024-07-15 09:40:05.796860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.718 [2024-07-15 09:40:05.796864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.718 [2024-07-15 09:40:05.796875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.718 qpair failed and we were unable to recover it. 00:31:18.718 [2024-07-15 09:40:05.806814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.718 [2024-07-15 09:40:05.806853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.718 [2024-07-15 09:40:05.806864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.718 [2024-07-15 09:40:05.806869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.718 [2024-07-15 09:40:05.806873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.718 [2024-07-15 09:40:05.806884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.718 qpair failed and we were unable to recover it. 
00:31:18.718 [2024-07-15 09:40:05.816871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.718 [2024-07-15 09:40:05.816911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.718 [2024-07-15 09:40:05.816922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.718 [2024-07-15 09:40:05.816927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.719 [2024-07-15 09:40:05.816931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.719 [2024-07-15 09:40:05.816942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.719 qpair failed and we were unable to recover it. 00:31:18.719 [2024-07-15 09:40:05.826867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.719 [2024-07-15 09:40:05.826909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.719 [2024-07-15 09:40:05.826920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.719 [2024-07-15 09:40:05.826925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.719 [2024-07-15 09:40:05.826930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.719 [2024-07-15 09:40:05.826940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.719 qpair failed and we were unable to recover it. 00:31:18.719 [2024-07-15 09:40:05.836898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.719 [2024-07-15 09:40:05.836950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.719 [2024-07-15 09:40:05.836961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.719 [2024-07-15 09:40:05.836966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.719 [2024-07-15 09:40:05.836971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.719 [2024-07-15 09:40:05.836981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.719 qpair failed and we were unable to recover it. 
00:31:18.719 [2024-07-15 09:40:05.846917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.719 [2024-07-15 09:40:05.846960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.719 [2024-07-15 09:40:05.846971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.719 [2024-07-15 09:40:05.846976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.719 [2024-07-15 09:40:05.846980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.719 [2024-07-15 09:40:05.846991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.719 qpair failed and we were unable to recover it. 00:31:18.719 [2024-07-15 09:40:05.856992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.719 [2024-07-15 09:40:05.857039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.719 [2024-07-15 09:40:05.857052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.719 [2024-07-15 09:40:05.857057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.719 [2024-07-15 09:40:05.857062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.719 [2024-07-15 09:40:05.857072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.719 qpair failed and we were unable to recover it. 00:31:18.719 [2024-07-15 09:40:05.866858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.719 [2024-07-15 09:40:05.866904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.719 [2024-07-15 09:40:05.866915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.719 [2024-07-15 09:40:05.866920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.719 [2024-07-15 09:40:05.866924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.719 [2024-07-15 09:40:05.866934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.719 qpair failed and we were unable to recover it. 
00:31:18.719 [2024-07-15 09:40:05.877016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.719 [2024-07-15 09:40:05.877060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.719 [2024-07-15 09:40:05.877071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.719 [2024-07-15 09:40:05.877076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.719 [2024-07-15 09:40:05.877080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.719 [2024-07-15 09:40:05.877090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.719 qpair failed and we were unable to recover it. 00:31:18.719 [2024-07-15 09:40:05.886894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.719 [2024-07-15 09:40:05.886941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.719 [2024-07-15 09:40:05.886952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.719 [2024-07-15 09:40:05.886957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.719 [2024-07-15 09:40:05.886961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.719 [2024-07-15 09:40:05.886971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.719 qpair failed and we were unable to recover it. 00:31:18.719 [2024-07-15 09:40:05.897107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.719 [2024-07-15 09:40:05.897178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.719 [2024-07-15 09:40:05.897189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.719 [2024-07-15 09:40:05.897195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.719 [2024-07-15 09:40:05.897199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.719 [2024-07-15 09:40:05.897214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.719 qpair failed and we were unable to recover it. 
00:31:18.719 [2024-07-15 09:40:05.907064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.719 [2024-07-15 09:40:05.907108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.719 [2024-07-15 09:40:05.907119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.719 [2024-07-15 09:40:05.907124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.719 [2024-07-15 09:40:05.907129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.719 [2024-07-15 09:40:05.907139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.719 qpair failed and we were unable to recover it. 00:31:18.981 [2024-07-15 09:40:05.917104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.981 [2024-07-15 09:40:05.917154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.981 [2024-07-15 09:40:05.917165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.981 [2024-07-15 09:40:05.917170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.981 [2024-07-15 09:40:05.917174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.981 [2024-07-15 09:40:05.917185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.981 qpair failed and we were unable to recover it. 00:31:18.981 [2024-07-15 09:40:05.927188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.981 [2024-07-15 09:40:05.927276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.981 [2024-07-15 09:40:05.927287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.981 [2024-07-15 09:40:05.927293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.981 [2024-07-15 09:40:05.927297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.981 [2024-07-15 09:40:05.927308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.981 qpair failed and we were unable to recover it. 
00:31:18.981 [2024-07-15 09:40:05.937195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.981 [2024-07-15 09:40:05.937246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.981 [2024-07-15 09:40:05.937257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.981 [2024-07-15 09:40:05.937262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.981 [2024-07-15 09:40:05.937267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.981 [2024-07-15 09:40:05.937277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.981 qpair failed and we were unable to recover it. 00:31:18.981 [2024-07-15 09:40:05.947066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.981 [2024-07-15 09:40:05.947114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.981 [2024-07-15 09:40:05.947127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.981 [2024-07-15 09:40:05.947132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.981 [2024-07-15 09:40:05.947136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.981 [2024-07-15 09:40:05.947147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.981 qpair failed and we were unable to recover it. 00:31:18.981 [2024-07-15 09:40:05.957302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.981 [2024-07-15 09:40:05.957376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.981 [2024-07-15 09:40:05.957387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.981 [2024-07-15 09:40:05.957392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.981 [2024-07-15 09:40:05.957396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.981 [2024-07-15 09:40:05.957407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.981 qpair failed and we were unable to recover it. 
00:31:18.982 [2024-07-15 09:40:05.967232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.982 [2024-07-15 09:40:05.967273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.982 [2024-07-15 09:40:05.967283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.982 [2024-07-15 09:40:05.967288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.982 [2024-07-15 09:40:05.967293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.982 [2024-07-15 09:40:05.967303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.982 qpair failed and we were unable to recover it. 00:31:18.982 [2024-07-15 09:40:05.977312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.982 [2024-07-15 09:40:05.977355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.982 [2024-07-15 09:40:05.977366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.982 [2024-07-15 09:40:05.977370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.982 [2024-07-15 09:40:05.977375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.982 [2024-07-15 09:40:05.977385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.982 qpair failed and we were unable to recover it. 00:31:18.982 [2024-07-15 09:40:05.987271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.982 [2024-07-15 09:40:05.987318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.982 [2024-07-15 09:40:05.987329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.982 [2024-07-15 09:40:05.987334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.982 [2024-07-15 09:40:05.987338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.982 [2024-07-15 09:40:05.987351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.982 qpair failed and we were unable to recover it. 
00:31:18.982 [2024-07-15 09:40:05.997331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.982 [2024-07-15 09:40:05.997383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.982 [2024-07-15 09:40:05.997395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.982 [2024-07-15 09:40:05.997400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.982 [2024-07-15 09:40:05.997405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.982 [2024-07-15 09:40:05.997416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.982 qpair failed and we were unable to recover it. 00:31:18.982 [2024-07-15 09:40:06.007207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.982 [2024-07-15 09:40:06.007250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.982 [2024-07-15 09:40:06.007261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.982 [2024-07-15 09:40:06.007266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.982 [2024-07-15 09:40:06.007271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.982 [2024-07-15 09:40:06.007281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.982 qpair failed and we were unable to recover it. 00:31:18.982 [2024-07-15 09:40:06.017401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.982 [2024-07-15 09:40:06.017444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.982 [2024-07-15 09:40:06.017455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.982 [2024-07-15 09:40:06.017460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.982 [2024-07-15 09:40:06.017465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.982 [2024-07-15 09:40:06.017475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.982 qpair failed and we were unable to recover it. 
00:31:18.982 [2024-07-15 09:40:06.027459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.982 [2024-07-15 09:40:06.027544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.982 [2024-07-15 09:40:06.027555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.982 [2024-07-15 09:40:06.027562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.982 [2024-07-15 09:40:06.027566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.982 [2024-07-15 09:40:06.027576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.982 qpair failed and we were unable to recover it. 00:31:18.982 [2024-07-15 09:40:06.037429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.982 [2024-07-15 09:40:06.037484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.982 [2024-07-15 09:40:06.037495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.982 [2024-07-15 09:40:06.037500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.982 [2024-07-15 09:40:06.037504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.982 [2024-07-15 09:40:06.037515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.982 qpair failed and we were unable to recover it. 00:31:18.982 [2024-07-15 09:40:06.047437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.982 [2024-07-15 09:40:06.047483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.982 [2024-07-15 09:40:06.047501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.982 [2024-07-15 09:40:06.047507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.982 [2024-07-15 09:40:06.047512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.982 [2024-07-15 09:40:06.047526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.982 qpair failed and we were unable to recover it. 
00:31:18.982 [2024-07-15 09:40:06.057530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.982 [2024-07-15 09:40:06.057578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.982 [2024-07-15 09:40:06.057597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.982 [2024-07-15 09:40:06.057603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.982 [2024-07-15 09:40:06.057608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.982 [2024-07-15 09:40:06.057621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.982 qpair failed and we were unable to recover it. 00:31:18.982 [2024-07-15 09:40:06.067519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.982 [2024-07-15 09:40:06.067568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.982 [2024-07-15 09:40:06.067586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.982 [2024-07-15 09:40:06.067592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.982 [2024-07-15 09:40:06.067597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.982 [2024-07-15 09:40:06.067611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.982 qpair failed and we were unable to recover it. 00:31:18.982 [2024-07-15 09:40:06.077517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.982 [2024-07-15 09:40:06.077599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.982 [2024-07-15 09:40:06.077618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.982 [2024-07-15 09:40:06.077625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.982 [2024-07-15 09:40:06.077632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.982 [2024-07-15 09:40:06.077646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.982 qpair failed and we were unable to recover it. 
00:31:18.982 [2024-07-15 09:40:06.087587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.982 [2024-07-15 09:40:06.087658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.982 [2024-07-15 09:40:06.087670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.982 [2024-07-15 09:40:06.087675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.982 [2024-07-15 09:40:06.087680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.982 [2024-07-15 09:40:06.087691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.982 qpair failed and we were unable to recover it. 00:31:18.982 [2024-07-15 09:40:06.097605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.982 [2024-07-15 09:40:06.097678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.982 [2024-07-15 09:40:06.097690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.982 [2024-07-15 09:40:06.097695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.982 [2024-07-15 09:40:06.097700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.982 [2024-07-15 09:40:06.097710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.982 qpair failed and we were unable to recover it. 00:31:18.982 [2024-07-15 09:40:06.107625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.982 [2024-07-15 09:40:06.107667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.983 [2024-07-15 09:40:06.107679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.983 [2024-07-15 09:40:06.107684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.983 [2024-07-15 09:40:06.107688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.983 [2024-07-15 09:40:06.107699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.983 qpair failed and we were unable to recover it. 
00:31:18.983 [2024-07-15 09:40:06.117532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.983 [2024-07-15 09:40:06.117592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.983 [2024-07-15 09:40:06.117604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.983 [2024-07-15 09:40:06.117609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.983 [2024-07-15 09:40:06.117613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.983 [2024-07-15 09:40:06.117624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.983 qpair failed and we were unable to recover it. 00:31:18.983 [2024-07-15 09:40:06.127692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.983 [2024-07-15 09:40:06.127740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.983 [2024-07-15 09:40:06.127756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.983 [2024-07-15 09:40:06.127761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.983 [2024-07-15 09:40:06.127766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.983 [2024-07-15 09:40:06.127776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.983 qpair failed and we were unable to recover it. 00:31:18.983 [2024-07-15 09:40:06.137760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.983 [2024-07-15 09:40:06.137806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.983 [2024-07-15 09:40:06.137817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.983 [2024-07-15 09:40:06.137822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.983 [2024-07-15 09:40:06.137826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.983 [2024-07-15 09:40:06.137837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.983 qpair failed and we were unable to recover it. 
00:31:18.983 [2024-07-15 09:40:06.147610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.983 [2024-07-15 09:40:06.147652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.983 [2024-07-15 09:40:06.147663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.983 [2024-07-15 09:40:06.147668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.983 [2024-07-15 09:40:06.147672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.983 [2024-07-15 09:40:06.147683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.983 qpair failed and we were unable to recover it. 00:31:18.983 [2024-07-15 09:40:06.157777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.983 [2024-07-15 09:40:06.157826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.983 [2024-07-15 09:40:06.157837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.983 [2024-07-15 09:40:06.157842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.983 [2024-07-15 09:40:06.157846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.983 [2024-07-15 09:40:06.157857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.983 qpair failed and we were unable to recover it. 00:31:18.983 [2024-07-15 09:40:06.167792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.983 [2024-07-15 09:40:06.167836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.983 [2024-07-15 09:40:06.167846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.983 [2024-07-15 09:40:06.167854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.983 [2024-07-15 09:40:06.167860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.983 [2024-07-15 09:40:06.167870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.983 qpair failed and we were unable to recover it. 
00:31:18.983 [2024-07-15 09:40:06.177727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.983 [2024-07-15 09:40:06.177780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.983 [2024-07-15 09:40:06.177791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.983 [2024-07-15 09:40:06.177796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.983 [2024-07-15 09:40:06.177801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:18.983 [2024-07-15 09:40:06.177811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.983 qpair failed and we were unable to recover it. 00:31:19.245 [2024-07-15 09:40:06.187847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.245 [2024-07-15 09:40:06.187892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.245 [2024-07-15 09:40:06.187903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.245 [2024-07-15 09:40:06.187908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.245 [2024-07-15 09:40:06.187913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.245 [2024-07-15 09:40:06.187924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.245 qpair failed and we were unable to recover it. 00:31:19.245 [2024-07-15 09:40:06.197883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.245 [2024-07-15 09:40:06.197934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.245 [2024-07-15 09:40:06.197945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.245 [2024-07-15 09:40:06.197950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.245 [2024-07-15 09:40:06.197955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.245 [2024-07-15 09:40:06.197966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.245 qpair failed and we were unable to recover it. 
00:31:19.245 [2024-07-15 09:40:06.207762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.245 [2024-07-15 09:40:06.207807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.245 [2024-07-15 09:40:06.207818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.245 [2024-07-15 09:40:06.207823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.245 [2024-07-15 09:40:06.207827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.245 [2024-07-15 09:40:06.207838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.245 qpair failed and we were unable to recover it. 00:31:19.245 [2024-07-15 09:40:06.217830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.245 [2024-07-15 09:40:06.217873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.245 [2024-07-15 09:40:06.217884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.245 [2024-07-15 09:40:06.217889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.245 [2024-07-15 09:40:06.217893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.245 [2024-07-15 09:40:06.217904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.245 qpair failed and we were unable to recover it. 00:31:19.246 [2024-07-15 09:40:06.227949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.246 [2024-07-15 09:40:06.227991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.246 [2024-07-15 09:40:06.228001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.246 [2024-07-15 09:40:06.228007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.246 [2024-07-15 09:40:06.228011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.246 [2024-07-15 09:40:06.228021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.246 qpair failed and we were unable to recover it. 
00:31:19.246 [2024-07-15 09:40:06.237952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.246 [2024-07-15 09:40:06.237999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.246 [2024-07-15 09:40:06.238010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.246 [2024-07-15 09:40:06.238015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.246 [2024-07-15 09:40:06.238020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.246 [2024-07-15 09:40:06.238030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.246 qpair failed and we were unable to recover it. 00:31:19.246 [2024-07-15 09:40:06.248024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.246 [2024-07-15 09:40:06.248103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.246 [2024-07-15 09:40:06.248114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.246 [2024-07-15 09:40:06.248119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.246 [2024-07-15 09:40:06.248124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.246 [2024-07-15 09:40:06.248135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.246 qpair failed and we were unable to recover it. 00:31:19.246 [2024-07-15 09:40:06.258078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.246 [2024-07-15 09:40:06.258125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.246 [2024-07-15 09:40:06.258138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.246 [2024-07-15 09:40:06.258143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.246 [2024-07-15 09:40:06.258148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.246 [2024-07-15 09:40:06.258158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.246 qpair failed and we were unable to recover it. 
00:31:19.246 [2024-07-15 09:40:06.267937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.246 [2024-07-15 09:40:06.267986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.246 [2024-07-15 09:40:06.267996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.246 [2024-07-15 09:40:06.268001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.246 [2024-07-15 09:40:06.268006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.246 [2024-07-15 09:40:06.268016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.246 qpair failed and we were unable to recover it. 00:31:19.246 [2024-07-15 09:40:06.278008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.246 [2024-07-15 09:40:06.278058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.246 [2024-07-15 09:40:06.278069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.246 [2024-07-15 09:40:06.278074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.246 [2024-07-15 09:40:06.278078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.246 [2024-07-15 09:40:06.278089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.246 qpair failed and we were unable to recover it. 00:31:19.246 [2024-07-15 09:40:06.288113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.246 [2024-07-15 09:40:06.288174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.246 [2024-07-15 09:40:06.288185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.246 [2024-07-15 09:40:06.288189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.246 [2024-07-15 09:40:06.288194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.246 [2024-07-15 09:40:06.288204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.246 qpair failed and we were unable to recover it. 
00:31:19.246 [2024-07-15 09:40:06.298068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.246 [2024-07-15 09:40:06.298123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.246 [2024-07-15 09:40:06.298134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.246 [2024-07-15 09:40:06.298139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.246 [2024-07-15 09:40:06.298144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.246 [2024-07-15 09:40:06.298157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.246 qpair failed and we were unable to recover it. 00:31:19.246 [2024-07-15 09:40:06.308156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.246 [2024-07-15 09:40:06.308213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.246 [2024-07-15 09:40:06.308225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.246 [2024-07-15 09:40:06.308230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.246 [2024-07-15 09:40:06.308235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.246 [2024-07-15 09:40:06.308246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.246 qpair failed and we were unable to recover it. 00:31:19.246 [2024-07-15 09:40:06.318183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.246 [2024-07-15 09:40:06.318230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.246 [2024-07-15 09:40:06.318241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.246 [2024-07-15 09:40:06.318246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.246 [2024-07-15 09:40:06.318250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.246 [2024-07-15 09:40:06.318261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.246 qpair failed and we were unable to recover it. 
00:31:19.246 [2024-07-15 09:40:06.328303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.246 [2024-07-15 09:40:06.328385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.246 [2024-07-15 09:40:06.328397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.246 [2024-07-15 09:40:06.328402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.246 [2024-07-15 09:40:06.328407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.246 [2024-07-15 09:40:06.328417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.246 qpair failed and we were unable to recover it. 00:31:19.246 [2024-07-15 09:40:06.338299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.246 [2024-07-15 09:40:06.338346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.246 [2024-07-15 09:40:06.338357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.246 [2024-07-15 09:40:06.338362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.246 [2024-07-15 09:40:06.338366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.246 [2024-07-15 09:40:06.338377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.246 qpair failed and we were unable to recover it. 00:31:19.246 [2024-07-15 09:40:06.348186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.246 [2024-07-15 09:40:06.348241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.246 [2024-07-15 09:40:06.348254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.246 [2024-07-15 09:40:06.348259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.246 [2024-07-15 09:40:06.348264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.246 [2024-07-15 09:40:06.348274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.246 qpair failed and we were unable to recover it. 
00:31:19.246 [2024-07-15 09:40:06.358311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.246 [2024-07-15 09:40:06.358358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.246 [2024-07-15 09:40:06.358369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.246 [2024-07-15 09:40:06.358374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.246 [2024-07-15 09:40:06.358379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.246 [2024-07-15 09:40:06.358389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.246 qpair failed and we were unable to recover it. 00:31:19.246 [2024-07-15 09:40:06.368332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.246 [2024-07-15 09:40:06.368373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.247 [2024-07-15 09:40:06.368385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.247 [2024-07-15 09:40:06.368390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.247 [2024-07-15 09:40:06.368395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.247 [2024-07-15 09:40:06.368406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.247 qpair failed and we were unable to recover it. 00:31:19.247 [2024-07-15 09:40:06.378266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.247 [2024-07-15 09:40:06.378317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.247 [2024-07-15 09:40:06.378327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.247 [2024-07-15 09:40:06.378332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.247 [2024-07-15 09:40:06.378337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.247 [2024-07-15 09:40:06.378347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.247 qpair failed and we were unable to recover it. 
00:31:19.247 [2024-07-15 09:40:06.388383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.247 [2024-07-15 09:40:06.388428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.247 [2024-07-15 09:40:06.388439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.247 [2024-07-15 09:40:06.388444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.247 [2024-07-15 09:40:06.388448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.247 [2024-07-15 09:40:06.388461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.247 qpair failed and we were unable to recover it. 00:31:19.247 [2024-07-15 09:40:06.398423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.247 [2024-07-15 09:40:06.398467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.247 [2024-07-15 09:40:06.398478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.247 [2024-07-15 09:40:06.398484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.247 [2024-07-15 09:40:06.398488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.247 [2024-07-15 09:40:06.398498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.247 qpair failed and we were unable to recover it. 00:31:19.247 [2024-07-15 09:40:06.408431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.247 [2024-07-15 09:40:06.408479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.247 [2024-07-15 09:40:06.408497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.247 [2024-07-15 09:40:06.408503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.247 [2024-07-15 09:40:06.408508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.247 [2024-07-15 09:40:06.408522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.247 qpair failed and we were unable to recover it. 
00:31:19.247 [2024-07-15 09:40:06.418509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.247 [2024-07-15 09:40:06.418556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.247 [2024-07-15 09:40:06.418568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.247 [2024-07-15 09:40:06.418573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.247 [2024-07-15 09:40:06.418578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.247 [2024-07-15 09:40:06.418589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.247 qpair failed and we were unable to recover it. 00:31:19.247 [2024-07-15 09:40:06.428512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.247 [2024-07-15 09:40:06.428568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.247 [2024-07-15 09:40:06.428579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.247 [2024-07-15 09:40:06.428584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.247 [2024-07-15 09:40:06.428588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.247 [2024-07-15 09:40:06.428599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.247 qpair failed and we were unable to recover it. 00:31:19.247 [2024-07-15 09:40:06.438535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.247 [2024-07-15 09:40:06.438581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.247 [2024-07-15 09:40:06.438595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.247 [2024-07-15 09:40:06.438600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.247 [2024-07-15 09:40:06.438604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.247 [2024-07-15 09:40:06.438614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.247 qpair failed and we were unable to recover it. 
00:31:19.509 [2024-07-15 09:40:06.448557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.509 [2024-07-15 09:40:06.448599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.509 [2024-07-15 09:40:06.448610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.509 [2024-07-15 09:40:06.448615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.509 [2024-07-15 09:40:06.448620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.509 [2024-07-15 09:40:06.448630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.509 qpair failed and we were unable to recover it. 00:31:19.509 [2024-07-15 09:40:06.458606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.509 [2024-07-15 09:40:06.458649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.509 [2024-07-15 09:40:06.458660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.509 [2024-07-15 09:40:06.458665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.509 [2024-07-15 09:40:06.458670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.509 [2024-07-15 09:40:06.458680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.509 qpair failed and we were unable to recover it. 00:31:19.509 [2024-07-15 09:40:06.468602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.509 [2024-07-15 09:40:06.468645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.509 [2024-07-15 09:40:06.468656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.509 [2024-07-15 09:40:06.468661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.509 [2024-07-15 09:40:06.468665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.509 [2024-07-15 09:40:06.468675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.509 qpair failed and we were unable to recover it. 
00:31:19.509 [2024-07-15 09:40:06.478627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.509 [2024-07-15 09:40:06.478685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.509 [2024-07-15 09:40:06.478697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.509 [2024-07-15 09:40:06.478703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.509 [2024-07-15 09:40:06.478712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.509 [2024-07-15 09:40:06.478724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.509 qpair failed and we were unable to recover it. 00:31:19.509 [2024-07-15 09:40:06.488549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.509 [2024-07-15 09:40:06.488611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.509 [2024-07-15 09:40:06.488623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.509 [2024-07-15 09:40:06.488628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.509 [2024-07-15 09:40:06.488632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.509 [2024-07-15 09:40:06.488644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.509 qpair failed and we were unable to recover it. 00:31:19.509 [2024-07-15 09:40:06.498736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.509 [2024-07-15 09:40:06.498783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.510 [2024-07-15 09:40:06.498795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.510 [2024-07-15 09:40:06.498800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.510 [2024-07-15 09:40:06.498805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.510 [2024-07-15 09:40:06.498816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.510 qpair failed and we were unable to recover it. 
00:31:19.510 [2024-07-15 09:40:06.508784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.510 [2024-07-15 09:40:06.508852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.510 [2024-07-15 09:40:06.508863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.510 [2024-07-15 09:40:06.508868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.510 [2024-07-15 09:40:06.508873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.510 [2024-07-15 09:40:06.508883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.510 qpair failed and we were unable to recover it. 00:31:19.510 [2024-07-15 09:40:06.518765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.510 [2024-07-15 09:40:06.518849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.510 [2024-07-15 09:40:06.518860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.510 [2024-07-15 09:40:06.518865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.510 [2024-07-15 09:40:06.518870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.510 [2024-07-15 09:40:06.518880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.510 qpair failed and we were unable to recover it. 00:31:19.510 [2024-07-15 09:40:06.528692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.510 [2024-07-15 09:40:06.528732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.510 [2024-07-15 09:40:06.528743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.510 [2024-07-15 09:40:06.528749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.510 [2024-07-15 09:40:06.528757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.510 [2024-07-15 09:40:06.528768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.510 qpair failed and we were unable to recover it. 
00:31:19.510 [2024-07-15 09:40:06.538792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.510 [2024-07-15 09:40:06.538842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.510 [2024-07-15 09:40:06.538852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.510 [2024-07-15 09:40:06.538857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.510 [2024-07-15 09:40:06.538862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.510 [2024-07-15 09:40:06.538872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.510 qpair failed and we were unable to recover it. 00:31:19.510 [2024-07-15 09:40:06.548795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.510 [2024-07-15 09:40:06.548835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.510 [2024-07-15 09:40:06.548847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.510 [2024-07-15 09:40:06.548852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.510 [2024-07-15 09:40:06.548856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.510 [2024-07-15 09:40:06.548866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.510 qpair failed and we were unable to recover it. 00:31:19.510 [2024-07-15 09:40:06.558846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.510 [2024-07-15 09:40:06.558896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.510 [2024-07-15 09:40:06.558908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.510 [2024-07-15 09:40:06.558913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.510 [2024-07-15 09:40:06.558917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.510 [2024-07-15 09:40:06.558928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.510 qpair failed and we were unable to recover it. 
00:31:19.510 [2024-07-15 09:40:06.568804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.510 [2024-07-15 09:40:06.568884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.510 [2024-07-15 09:40:06.568895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.510 [2024-07-15 09:40:06.568903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.510 [2024-07-15 09:40:06.568907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.510 [2024-07-15 09:40:06.568917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.510 qpair failed and we were unable to recover it. 00:31:19.510 [2024-07-15 09:40:06.578982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.510 [2024-07-15 09:40:06.579053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.510 [2024-07-15 09:40:06.579064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.510 [2024-07-15 09:40:06.579069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.510 [2024-07-15 09:40:06.579074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.510 [2024-07-15 09:40:06.579084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.510 qpair failed and we were unable to recover it. 00:31:19.510 [2024-07-15 09:40:06.588820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.510 [2024-07-15 09:40:06.588867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.510 [2024-07-15 09:40:06.588877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.510 [2024-07-15 09:40:06.588882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.510 [2024-07-15 09:40:06.588887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.510 [2024-07-15 09:40:06.588897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.510 qpair failed and we were unable to recover it. 
00:31:19.510 [2024-07-15 09:40:06.598952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.510 [2024-07-15 09:40:06.598999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.510 [2024-07-15 09:40:06.599010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.510 [2024-07-15 09:40:06.599015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.510 [2024-07-15 09:40:06.599019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.510 [2024-07-15 09:40:06.599029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.510 qpair failed and we were unable to recover it. 00:31:19.510 [2024-07-15 09:40:06.608958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.510 [2024-07-15 09:40:06.609003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.510 [2024-07-15 09:40:06.609013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.510 [2024-07-15 09:40:06.609018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.510 [2024-07-15 09:40:06.609023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.510 [2024-07-15 09:40:06.609033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.510 qpair failed and we were unable to recover it. 00:31:19.510 [2024-07-15 09:40:06.619027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.510 [2024-07-15 09:40:06.619071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.510 [2024-07-15 09:40:06.619083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.510 [2024-07-15 09:40:06.619089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.510 [2024-07-15 09:40:06.619093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.510 [2024-07-15 09:40:06.619104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.510 qpair failed and we were unable to recover it. 
00:31:19.510 [2024-07-15 09:40:06.629040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.510 [2024-07-15 09:40:06.629083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.510 [2024-07-15 09:40:06.629094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.510 [2024-07-15 09:40:06.629100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.510 [2024-07-15 09:40:06.629104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.510 [2024-07-15 09:40:06.629114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.510 qpair failed and we were unable to recover it. 00:31:19.510 [2024-07-15 09:40:06.639054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.510 [2024-07-15 09:40:06.639099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.510 [2024-07-15 09:40:06.639110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.510 [2024-07-15 09:40:06.639114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.511 [2024-07-15 09:40:06.639119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.511 [2024-07-15 09:40:06.639129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.511 qpair failed and we were unable to recover it. 00:31:19.511 [2024-07-15 09:40:06.648964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.511 [2024-07-15 09:40:06.649024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.511 [2024-07-15 09:40:06.649035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.511 [2024-07-15 09:40:06.649040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.511 [2024-07-15 09:40:06.649045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.511 [2024-07-15 09:40:06.649055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.511 qpair failed and we were unable to recover it. 
00:31:19.511 [2024-07-15 09:40:06.659160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.511 [2024-07-15 09:40:06.659208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.511 [2024-07-15 09:40:06.659219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.511 [2024-07-15 09:40:06.659227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.511 [2024-07-15 09:40:06.659232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.511 [2024-07-15 09:40:06.659242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.511 qpair failed and we were unable to recover it. 00:31:19.511 [2024-07-15 09:40:06.669024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.511 [2024-07-15 09:40:06.669067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.511 [2024-07-15 09:40:06.669078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.511 [2024-07-15 09:40:06.669083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.511 [2024-07-15 09:40:06.669087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.511 [2024-07-15 09:40:06.669098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.511 qpair failed and we were unable to recover it. 00:31:19.511 [2024-07-15 09:40:06.679187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.511 [2024-07-15 09:40:06.679232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.511 [2024-07-15 09:40:06.679243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.511 [2024-07-15 09:40:06.679249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.511 [2024-07-15 09:40:06.679253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.511 [2024-07-15 09:40:06.679264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.511 qpair failed and we were unable to recover it. 
00:31:19.511 [2024-07-15 09:40:06.689211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.511 [2024-07-15 09:40:06.689253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.511 [2024-07-15 09:40:06.689263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.511 [2024-07-15 09:40:06.689268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.511 [2024-07-15 09:40:06.689273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.511 [2024-07-15 09:40:06.689283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.511 qpair failed and we were unable to recover it. 00:31:19.511 [2024-07-15 09:40:06.699265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.511 [2024-07-15 09:40:06.699344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.511 [2024-07-15 09:40:06.699355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.511 [2024-07-15 09:40:06.699360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.511 [2024-07-15 09:40:06.699365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.511 [2024-07-15 09:40:06.699376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.511 qpair failed and we were unable to recover it. 00:31:19.772 [2024-07-15 09:40:06.709321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.772 [2024-07-15 09:40:06.709392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.772 [2024-07-15 09:40:06.709402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.772 [2024-07-15 09:40:06.709408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.772 [2024-07-15 09:40:06.709412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.772 [2024-07-15 09:40:06.709422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.772 qpair failed and we were unable to recover it. 
00:31:19.772 [2024-07-15 09:40:06.719295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.772 [2024-07-15 09:40:06.719376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.772 [2024-07-15 09:40:06.719387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.772 [2024-07-15 09:40:06.719392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.772 [2024-07-15 09:40:06.719396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.772 [2024-07-15 09:40:06.719408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.772 qpair failed and we were unable to recover it. 00:31:19.772 [2024-07-15 09:40:06.729309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.772 [2024-07-15 09:40:06.729363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.772 [2024-07-15 09:40:06.729374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.772 [2024-07-15 09:40:06.729379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.772 [2024-07-15 09:40:06.729384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.772 [2024-07-15 09:40:06.729394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.772 qpair failed and we were unable to recover it. 00:31:19.772 [2024-07-15 09:40:06.739350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.772 [2024-07-15 09:40:06.739392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.772 [2024-07-15 09:40:06.739403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.772 [2024-07-15 09:40:06.739408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.772 [2024-07-15 09:40:06.739413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.772 [2024-07-15 09:40:06.739423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.772 qpair failed and we were unable to recover it. 
00:31:19.772 [2024-07-15 09:40:06.749374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.772 [2024-07-15 09:40:06.749418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.772 [2024-07-15 09:40:06.749432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.772 [2024-07-15 09:40:06.749437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.772 [2024-07-15 09:40:06.749441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.772 [2024-07-15 09:40:06.749452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.772 qpair failed and we were unable to recover it. 00:31:19.772 [2024-07-15 09:40:06.759401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.772 [2024-07-15 09:40:06.759456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.772 [2024-07-15 09:40:06.759467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.772 [2024-07-15 09:40:06.759472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.772 [2024-07-15 09:40:06.759476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.772 [2024-07-15 09:40:06.759486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.772 qpair failed and we were unable to recover it. 00:31:19.772 [2024-07-15 09:40:06.769408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.772 [2024-07-15 09:40:06.769454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.772 [2024-07-15 09:40:06.769472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.772 [2024-07-15 09:40:06.769478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.772 [2024-07-15 09:40:06.769483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.772 [2024-07-15 09:40:06.769496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.772 qpair failed and we were unable to recover it. 
00:31:19.772 [2024-07-15 09:40:06.779484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.772 [2024-07-15 09:40:06.779536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.772 [2024-07-15 09:40:06.779554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.772 [2024-07-15 09:40:06.779560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.772 [2024-07-15 09:40:06.779565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.772 [2024-07-15 09:40:06.779579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.772 qpair failed and we were unable to recover it. 00:31:19.772 [2024-07-15 09:40:06.789479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.772 [2024-07-15 09:40:06.789528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.772 [2024-07-15 09:40:06.789546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.772 [2024-07-15 09:40:06.789552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.772 [2024-07-15 09:40:06.789557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.772 [2024-07-15 09:40:06.789574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.773 qpair failed and we were unable to recover it. 00:31:19.773 [2024-07-15 09:40:06.799512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.773 [2024-07-15 09:40:06.799567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.773 [2024-07-15 09:40:06.799585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.773 [2024-07-15 09:40:06.799591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.773 [2024-07-15 09:40:06.799596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.773 [2024-07-15 09:40:06.799610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.773 qpair failed and we were unable to recover it. 
00:31:19.773 [2024-07-15 09:40:06.809439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.773 [2024-07-15 09:40:06.809484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.773 [2024-07-15 09:40:06.809497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.773 [2024-07-15 09:40:06.809502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.773 [2024-07-15 09:40:06.809506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b58000b90 00:31:19.773 [2024-07-15 09:40:06.809517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.773 qpair failed and we were unable to recover it. 00:31:19.773 [2024-07-15 09:40:06.819635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.773 [2024-07-15 09:40:06.819749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.773 [2024-07-15 09:40:06.819820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.773 [2024-07-15 09:40:06.819845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.773 [2024-07-15 09:40:06.819864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b50000b90 00:31:19.773 [2024-07-15 09:40:06.819916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.773 qpair failed and we were unable to recover it. 00:31:19.773 [2024-07-15 09:40:06.829623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.773 [2024-07-15 09:40:06.829708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.773 [2024-07-15 09:40:06.829743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.773 [2024-07-15 09:40:06.829767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.773 [2024-07-15 09:40:06.829782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b50000b90 00:31:19.773 [2024-07-15 09:40:06.829816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.773 qpair failed and we were unable to recover it. 
00:31:19.773 [2024-07-15 09:40:06.839616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.773 [2024-07-15 09:40:06.839709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.773 [2024-07-15 09:40:06.839791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.773 [2024-07-15 09:40:06.839816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.773 [2024-07-15 09:40:06.839835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b60000b90 00:31:19.773 [2024-07-15 09:40:06.839889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:19.773 qpair failed and we were unable to recover it. 00:31:19.773 [2024-07-15 09:40:06.849643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.773 [2024-07-15 09:40:06.849731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.773 [2024-07-15 09:40:06.849771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.773 [2024-07-15 09:40:06.849786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.773 [2024-07-15 09:40:06.849800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8b60000b90 00:31:19.773 [2024-07-15 09:40:06.849832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:19.773 qpair failed and we were unable to recover it. 00:31:19.773 [2024-07-15 09:40:06.850056] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:31:19.773 A controller has encountered a failure and is being reset. 00:31:19.773 [2024-07-15 09:40:06.850106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff8800 (9): Bad file descriptor 00:31:19.773 Controller properly reset. 00:31:19.773 Initializing NVMe Controllers 00:31:19.773 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:19.773 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:19.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:31:19.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:31:19.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:31:19.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:31:19.773 Initialization complete. Launching workers. 
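The long run of "Connect command failed ... qpair failed and we were unable to recover it" entries above is the expected host-side view of this disconnect test: the target rejects every I/O-qpair CONNECT retry (sct 1, sc 130) while the controller is gone, spdk_nvme_qpair_process_completions keeps reporting CQ transport error -6, and recovery only happens once the keep-alive fails and the controller is reset and re-attached. A similar disconnect can be provoked by hand against a running nvmf_tgt; the sketch below reuses the 10.0.0.2:4420 listener and cnode1 subsystem named in the log, but it is a generic rpc.py/nvme-cli approximation, not the helper functions the test script itself calls.

# host side (separate shell): connect and keep some I/O in flight
#   nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
#   dd if=/dev/nvme0n1 of=/dev/null bs=4k &
# target side: yank the listener out from under the host ...
./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
sleep 5
# ... then restore it; the host driver reconnects and resets the controller,
# matching the "Controller properly reset" line above
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420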
00:31:19.773 Starting thread on core 1 00:31:19.773 Starting thread on core 2 00:31:19.773 Starting thread on core 3 00:31:19.773 Starting thread on core 0 00:31:19.773 09:40:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:31:19.773 00:31:19.773 real 0m11.476s 00:31:19.773 user 0m21.032s 00:31:19.773 sys 0m3.707s 00:31:19.773 09:40:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:19.773 09:40:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:19.773 ************************************ 00:31:19.773 END TEST nvmf_target_disconnect_tc2 00:31:19.773 ************************************ 00:31:20.033 09:40:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:31:20.033 09:40:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:31:20.033 09:40:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:31:20.033 09:40:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:31:20.033 09:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:20.033 09:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:31:20.033 09:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:20.033 09:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:31:20.033 09:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:20.033 09:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:20.033 rmmod nvme_tcp 00:31:20.033 rmmod nvme_fabrics 00:31:20.033 rmmod nvme_keyring 00:31:20.033 09:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:20.033 09:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:31:20.033 09:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:31:20.033 09:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 903114 ']' 00:31:20.033 09:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 903114 00:31:20.033 09:40:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 903114 ']' 00:31:20.033 09:40:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 903114 00:31:20.033 09:40:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:31:20.033 09:40:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:20.033 09:40:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 903114 00:31:20.033 09:40:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:31:20.033 09:40:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:31:20.033 09:40:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 903114' 00:31:20.033 killing process with pid 903114 00:31:20.033 09:40:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 903114 00:31:20.033 09:40:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 903114 00:31:20.293 09:40:07 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:20.293 09:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:20.293 09:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:20.293 09:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:20.293 09:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:20.293 09:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.293 09:40:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:20.293 09:40:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.205 09:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:22.205 00:31:22.205 real 0m22.336s 00:31:22.205 user 0m49.390s 00:31:22.205 sys 0m10.170s 00:31:22.205 09:40:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:22.205 09:40:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:22.205 ************************************ 00:31:22.205 END TEST nvmf_target_disconnect 00:31:22.205 ************************************ 00:31:22.205 09:40:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:22.205 09:40:09 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:31:22.205 09:40:09 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:22.205 09:40:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:22.465 09:40:09 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:31:22.465 00:31:22.465 real 23m21.328s 00:31:22.465 user 47m36.869s 00:31:22.465 sys 7m37.106s 00:31:22.465 09:40:09 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:22.465 09:40:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:22.465 ************************************ 00:31:22.465 END TEST nvmf_tcp 00:31:22.465 ************************************ 00:31:22.465 09:40:09 -- common/autotest_common.sh@1142 -- # return 0 00:31:22.465 09:40:09 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:31:22.465 09:40:09 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:22.465 09:40:09 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:22.465 09:40:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:22.465 09:40:09 -- common/autotest_common.sh@10 -- # set +x 00:31:22.465 ************************************ 00:31:22.465 START TEST spdkcli_nvmf_tcp 00:31:22.465 ************************************ 00:31:22.465 09:40:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:22.465 * Looking for test storage... 
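The spdkcli_nvmf_tcp run that follows starts a fresh nvmf_tgt and drives it through spdkcli_job.py: it creates six malloc bdevs, a TCP transport, three subsystems with namespaces, listeners and allowed hosts, checks the resulting tree against a match file, then deletes everything again. The same target state can be built directly with the JSON-RPC client; the sketch below mirrors the first subsystem from the command list further down (bdev sizes, serial number and 127.0.0.1:4260 are taken from the log; the short option flags are from memory, so treat it as an approximation rather than the test's exact invocation).

./scripts/rpc.py bdev_malloc_create 32 512 -b Malloc3        # 32 MiB bdev, 512 B blocks
./scripts/rpc.py bdev_malloc_create 32 512 -b Malloc4
./scripts/rpc.py nvmf_create_transport -t TCP
./scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 \
    -s N37SXV509SRW -a                                       # -a: allow any host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc4
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 \
    -t tcp -a 127.0.0.1 -s 4260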
00:31:22.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:31:22.465 09:40:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:31:22.465 09:40:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:22.465 09:40:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:31:22.465 09:40:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:22.465 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:31:22.465 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.465 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:22.465 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=905001 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 905001 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 905001 ']' 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:22.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:22.466 09:40:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:22.726 [2024-07-15 09:40:09.702779] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:31:22.726 [2024-07-15 09:40:09.702850] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905001 ] 00:31:22.726 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.726 [2024-07-15 09:40:09.772170] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:22.726 [2024-07-15 09:40:09.838747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:22.726 [2024-07-15 09:40:09.838750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:23.295 09:40:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:23.295 09:40:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:31:23.295 09:40:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:23.295 09:40:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:23.295 09:40:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:23.555 09:40:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:23.555 09:40:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:23.555 09:40:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:23.555 09:40:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:23.555 09:40:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:23.555 09:40:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:23.555 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:23.555 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:23.555 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:23.555 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:23.555 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:23.555 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:23.555 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:23.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:23.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:23.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:23.555 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:23.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:23.555 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:23.555 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:23.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:23.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:23.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:23.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:23.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:23.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:23.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:23.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:23.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:23.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:23.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:23.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:23.555 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:23.555 ' 00:31:26.099 [2024-07-15 09:40:12.830618] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:27.040 [2024-07-15 09:40:13.994393] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:28.955 [2024-07-15 09:40:16.128533] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:30.872 [2024-07-15 09:40:17.966057] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:32.259 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:32.259 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:32.259 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:32.259 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:32.260 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:32.260 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:32.260 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:32.260 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:32.260 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:32.260 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:32.260 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:32.260 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:32.260 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:32.260 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:32.260 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:32.260 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:32.260 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:32.260 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:32.260 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:32.260 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:32.260 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:32.260 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:32.260 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:32.260 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:32.260 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:32.260 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:32.260 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:32.260 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:32.520 09:40:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:32.520 09:40:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:32.520 09:40:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:32.520 09:40:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:32.521 09:40:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:32.521 09:40:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:32.521 09:40:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:31:32.521 09:40:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:32.781 09:40:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:32.781 09:40:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:32.781 09:40:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:32.781 09:40:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:32.781 09:40:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:32.781 09:40:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:32.781 09:40:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:32.781 09:40:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:32.781 09:40:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:32.781 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:32.781 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:32.781 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:32.781 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:32.781 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:32.781 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:32.781 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:32.781 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:32.781 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:32.781 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:32.781 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:32.781 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:32.781 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:32.781 ' 00:31:38.077 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:38.077 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:38.077 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:38.077 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:38.077 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:38.077 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:38.077 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:38.077 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:38.077 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:38.077 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:38.077 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:31:38.077 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:38.077 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:38.077 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:38.077 09:40:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:38.077 09:40:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:38.077 09:40:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:38.077 09:40:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 905001 00:31:38.077 09:40:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 905001 ']' 00:31:38.077 09:40:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 905001 00:31:38.077 09:40:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:31:38.077 09:40:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:38.077 09:40:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 905001 00:31:38.077 09:40:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:38.077 09:40:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:38.077 09:40:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 905001' 00:31:38.077 killing process with pid 905001 00:31:38.077 09:40:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 905001 00:31:38.077 09:40:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 905001 00:31:38.077 09:40:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:38.077 09:40:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:38.077 09:40:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 905001 ']' 00:31:38.077 09:40:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 905001 00:31:38.077 09:40:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 905001 ']' 00:31:38.077 09:40:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 905001 00:31:38.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (905001) - No such process 00:31:38.077 09:40:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 905001 is not found' 00:31:38.077 Process with pid 905001 is not found 00:31:38.077 09:40:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:38.077 09:40:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:38.077 09:40:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:38.077 00:31:38.077 real 0m15.536s 00:31:38.077 user 0m31.953s 00:31:38.077 sys 0m0.721s 00:31:38.077 09:40:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:38.077 09:40:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:38.077 ************************************ 00:31:38.077 END TEST spdkcli_nvmf_tcp 00:31:38.077 ************************************ 00:31:38.077 09:40:25 -- common/autotest_common.sh@1142 -- # return 0 00:31:38.077 09:40:25 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:38.077 09:40:25 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:38.077 09:40:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:38.077 09:40:25 -- common/autotest_common.sh@10 -- # set +x 00:31:38.077 ************************************ 00:31:38.077 START TEST nvmf_identify_passthru 00:31:38.077 ************************************ 00:31:38.077 09:40:25 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:38.077 * Looking for test storage... 00:31:38.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:38.077 09:40:25 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:38.077 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:38.077 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:38.077 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:38.077 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:38.077 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:38.077 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:38.077 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:38.077 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:38.077 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:38.077 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:38.077 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:38.077 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:38.077 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:38.077 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:38.077 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:38.077 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:38.077 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:38.077 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:38.077 09:40:25 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:38.077 09:40:25 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:38.078 09:40:25 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:38.078 09:40:25 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.078 09:40:25 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.078 09:40:25 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.078 09:40:25 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:38.078 09:40:25 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.078 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:31:38.078 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:38.078 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:38.078 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:38.078 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:38.078 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:38.078 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:38.078 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:38.078 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:38.078 09:40:25 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:38.078 09:40:25 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:38.078 09:40:25 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:38.078 09:40:25 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:38.078 09:40:25 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.078 09:40:25 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.078 09:40:25 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.078 09:40:25 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:38.078 09:40:25 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.078 09:40:25 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:38.078 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:38.078 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:38.078 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:38.078 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:38.078 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:38.078 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.078 09:40:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:38.078 09:40:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:38.078 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:38.078 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:38.078 09:40:25 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:31:38.078 09:40:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:46.276 09:40:33 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:46.276 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:46.276 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:46.276 Found net devices under 0000:31:00.0: cvl_0_0 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:46.276 Found net devices under 0000:31:00.1: cvl_0_1 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
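Aside on the device-discovery trace above: gather_supported_nvmf_pci_devs keeps only the NIC families the test supports (here the Intel E810 IDs 0x1592/0x159b) and resolves each matching PCI address to its kernel net device through sysfs, which is where the "Found net devices under 0000:31:00.0: cvl_0_0" lines come from. A hand-rolled sketch of that sysfs lookup follows; it uses lspci for the ID match instead of common.sh's internal pci_bus_cache, so treat it as illustrative rather than the actual implementation.

#!/usr/bin/env bash
# Sketch: map supported Intel E810 NICs (device IDs 0x1592/0x159b) to their net devices.
supported_ids='1592|159b'
for pci in $(lspci -Dn -d 8086: | awk -v ids="$supported_ids" '$3 ~ ("8086:(" ids ")") {print $1}'); do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $netdir ]] || continue            # NIC has no bound kernel net driver
        dev=${netdir##*/}
        state=$(cat "$netdir/operstate" 2>/dev/null)
        echo "Found net device under $pci: $dev (operstate: $state)"
    done
done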
00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:46.276 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:46.535 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:46.535 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:46.535 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:46.535 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:46.535 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:46.535 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:46.535 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:46.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:46.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:31:46.535 00:31:46.535 --- 10.0.0.2 ping statistics --- 00:31:46.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.535 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:31:46.535 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:46.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:46.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:31:46.535 00:31:46.535 --- 10.0.0.1 ping statistics --- 00:31:46.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.535 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:31:46.535 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:46.535 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:31:46.535 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:46.535 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:46.535 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:46.535 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:46.535 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:46.535 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:46.535 09:40:33 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:46.535 09:40:33 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:46.535 09:40:33 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:46.535 09:40:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:46.535 09:40:33 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:46.535 09:40:33 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:31:46.535 09:40:33 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:31:46.535 09:40:33 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:31:46.535 09:40:33 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:31:46.535 09:40:33 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:31:46.535 09:40:33 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:31:46.535 09:40:33 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:46.535 09:40:33 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:46.535 09:40:33 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:31:46.795 09:40:33 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:31:46.795 09:40:33 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:31:46.795 09:40:33 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:31:46.795 09:40:33 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:31:46.795 09:40:33 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:31:46.795 09:40:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:31:46.795 09:40:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:46.795 09:40:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:46.795 EAL: No free 2048 kB hugepages reported on node 1 00:31:47.364 
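For readability, the network topology that nvmf_tcp_init builds in the trace above is: the first E810 port (cvl_0_0) moves into a private namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; an iptables rule opens TCP port 4420 and the two pings verify reachability in both directions. The essential commands, collected from the log (only the comments are added):

ip netns add cvl_0_0_ns_spdk                        # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # first port -> target namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator keeps the second port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator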
09:40:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605499 00:31:47.364 09:40:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:31:47.364 09:40:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:47.364 09:40:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:47.364 EAL: No free 2048 kB hugepages reported on node 1 00:31:47.624 09:40:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:31:47.624 09:40:34 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:47.624 09:40:34 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:47.624 09:40:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:47.624 09:40:34 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:47.624 09:40:34 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:47.624 09:40:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:47.624 09:40:34 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=912486 00:31:47.624 09:40:34 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:47.624 09:40:34 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:47.624 09:40:34 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 912486 00:31:47.624 09:40:34 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 912486 ']' 00:31:47.624 09:40:34 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:47.624 09:40:34 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:47.624 09:40:34 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:47.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:47.624 09:40:34 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:47.624 09:40:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:47.884 [2024-07-15 09:40:34.855924] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:31:47.884 [2024-07-15 09:40:34.855980] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:47.884 EAL: No free 2048 kB hugepages reported on node 1 00:31:47.884 [2024-07-15 09:40:34.931444] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:47.884 [2024-07-15 09:40:35.003250] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:47.884 [2024-07-15 09:40:35.003290] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
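The nvme_identify phase that just completed resolves the first local NVMe controller's PCI address and scrapes its serial and model strings, which are later compared against what the passthru target reports over the fabric. A condensed sketch of those steps, using the same helper scripts and grep/awk filters seen in the trace:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# First NVMe BDF: gen_nvme.sh emits a bdev_nvme config, jq pulls out the PCI addresses.
bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)

# Serial and model of the local controller, identified over PCIe.
# Note: awk '{print $3}' keeps only the first word, which is why the trace
# records nvme_model_number=SAMSUNG rather than the full model string.
nvme_serial_number=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
    | grep 'Serial Number:' | awk '{print $3}')
nvme_model_number=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
    | grep 'Model Number:' | awk '{print $3}')
echo "bdf=$bdf serial=$nvme_serial_number model=$nvme_model_number"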
00:31:47.884 [2024-07-15 09:40:35.003298] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:47.884 [2024-07-15 09:40:35.003304] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:47.884 [2024-07-15 09:40:35.003310] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:47.884 [2024-07-15 09:40:35.003450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.884 [2024-07-15 09:40:35.003566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:47.884 [2024-07-15 09:40:35.003723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.884 [2024-07-15 09:40:35.003723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:48.455 09:40:35 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:48.455 09:40:35 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:31:48.455 09:40:35 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:48.455 09:40:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.455 09:40:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:48.455 INFO: Log level set to 20 00:31:48.455 INFO: Requests: 00:31:48.455 { 00:31:48.455 "jsonrpc": "2.0", 00:31:48.455 "method": "nvmf_set_config", 00:31:48.455 "id": 1, 00:31:48.455 "params": { 00:31:48.455 "admin_cmd_passthru": { 00:31:48.455 "identify_ctrlr": true 00:31:48.455 } 00:31:48.455 } 00:31:48.455 } 00:31:48.455 00:31:48.455 INFO: response: 00:31:48.455 { 00:31:48.455 "jsonrpc": "2.0", 00:31:48.455 "id": 1, 00:31:48.455 "result": true 00:31:48.455 } 00:31:48.455 00:31:48.455 09:40:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.455 09:40:35 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:48.455 09:40:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.455 09:40:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:48.455 INFO: Setting log level to 20 00:31:48.455 INFO: Setting log level to 20 00:31:48.455 INFO: Log level set to 20 00:31:48.455 INFO: Log level set to 20 00:31:48.455 INFO: Requests: 00:31:48.455 { 00:31:48.455 "jsonrpc": "2.0", 00:31:48.455 "method": "framework_start_init", 00:31:48.455 "id": 1 00:31:48.455 } 00:31:48.455 00:31:48.455 INFO: Requests: 00:31:48.455 { 00:31:48.455 "jsonrpc": "2.0", 00:31:48.455 "method": "framework_start_init", 00:31:48.455 "id": 1 00:31:48.455 } 00:31:48.455 00:31:48.715 [2024-07-15 09:40:35.714160] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:48.715 INFO: response: 00:31:48.715 { 00:31:48.715 "jsonrpc": "2.0", 00:31:48.715 "id": 1, 00:31:48.715 "result": true 00:31:48.715 } 00:31:48.715 00:31:48.715 INFO: response: 00:31:48.715 { 00:31:48.715 "jsonrpc": "2.0", 00:31:48.715 "id": 1, 00:31:48.715 "result": true 00:31:48.715 } 00:31:48.715 00:31:48.715 09:40:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.715 09:40:35 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:48.715 09:40:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.715 09:40:35 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:31:48.715 INFO: Setting log level to 40 00:31:48.715 INFO: Setting log level to 40 00:31:48.715 INFO: Setting log level to 40 00:31:48.715 [2024-07-15 09:40:35.727470] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:48.715 09:40:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.715 09:40:35 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:48.715 09:40:35 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:48.715 09:40:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:48.715 09:40:35 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:31:48.715 09:40:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.715 09:40:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:48.976 Nvme0n1 00:31:48.976 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.976 09:40:36 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:48.976 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.976 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:48.976 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.976 09:40:36 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:48.976 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.976 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:48.976 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.976 09:40:36 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:48.976 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.976 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:48.976 [2024-07-15 09:40:36.117011] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:48.976 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.976 09:40:36 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:48.976 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.976 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:48.976 [ 00:31:48.976 { 00:31:48.976 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:48.976 "subtype": "Discovery", 00:31:48.976 "listen_addresses": [], 00:31:48.976 "allow_any_host": true, 00:31:48.976 "hosts": [] 00:31:48.976 }, 00:31:48.976 { 00:31:48.976 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:48.976 "subtype": "NVMe", 00:31:48.976 "listen_addresses": [ 00:31:48.976 { 00:31:48.976 "trtype": "TCP", 00:31:48.976 "adrfam": "IPv4", 00:31:48.976 "traddr": "10.0.0.2", 00:31:48.976 "trsvcid": "4420" 00:31:48.976 } 00:31:48.976 ], 00:31:48.976 "allow_any_host": true, 00:31:48.976 "hosts": [], 00:31:48.976 "serial_number": 
"SPDK00000000000001", 00:31:48.976 "model_number": "SPDK bdev Controller", 00:31:48.976 "max_namespaces": 1, 00:31:48.976 "min_cntlid": 1, 00:31:48.976 "max_cntlid": 65519, 00:31:48.976 "namespaces": [ 00:31:48.976 { 00:31:48.976 "nsid": 1, 00:31:48.976 "bdev_name": "Nvme0n1", 00:31:48.976 "name": "Nvme0n1", 00:31:48.976 "nguid": "363447305260549900253845000000A3", 00:31:48.976 "uuid": "36344730-5260-5499-0025-3845000000a3" 00:31:48.976 } 00:31:48.976 ] 00:31:48.976 } 00:31:48.976 ] 00:31:48.976 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.976 09:40:36 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:48.976 09:40:36 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:48.976 09:40:36 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:49.236 EAL: No free 2048 kB hugepages reported on node 1 00:31:49.236 09:40:36 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605499 00:31:49.236 09:40:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:49.236 09:40:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:49.236 09:40:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:49.236 EAL: No free 2048 kB hugepages reported on node 1 00:31:49.495 09:40:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:31:49.495 09:40:36 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605499 '!=' S64GNE0R605499 ']' 00:31:49.495 09:40:36 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:31:49.495 09:40:36 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:49.495 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.495 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:49.496 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.496 09:40:36 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:49.496 09:40:36 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:49.496 09:40:36 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:49.496 09:40:36 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:31:49.496 09:40:36 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:49.496 09:40:36 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:31:49.496 09:40:36 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:49.496 09:40:36 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:49.496 rmmod nvme_tcp 00:31:49.496 rmmod nvme_fabrics 00:31:49.496 rmmod nvme_keyring 00:31:49.496 09:40:36 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:49.496 09:40:36 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:31:49.496 09:40:36 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:31:49.496 09:40:36 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 912486 ']' 00:31:49.496 09:40:36 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 912486 00:31:49.496 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 912486 ']' 00:31:49.496 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 912486 00:31:49.496 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:31:49.496 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:49.496 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 912486 00:31:49.755 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:49.755 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:49.755 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 912486' 00:31:49.755 killing process with pid 912486 00:31:49.755 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 912486 00:31:49.755 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 912486 00:31:50.014 09:40:36 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:50.014 09:40:36 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:50.014 09:40:36 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:50.014 09:40:36 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:50.014 09:40:36 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:50.014 09:40:36 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.014 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:50.015 09:40:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:51.927 09:40:39 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:51.927 00:31:51.927 real 0m13.950s 00:31:51.927 user 0m10.588s 00:31:51.927 sys 0m7.016s 00:31:51.927 09:40:39 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:51.927 09:40:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:51.927 ************************************ 00:31:51.927 END TEST nvmf_identify_passthru 00:31:51.927 ************************************ 00:31:51.927 09:40:39 -- common/autotest_common.sh@1142 -- # return 0 00:31:51.927 09:40:39 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:51.927 09:40:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:51.927 09:40:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:51.927 09:40:39 -- common/autotest_common.sh@10 -- # set +x 00:31:52.187 ************************************ 00:31:52.187 START TEST nvmf_dif 00:31:52.187 ************************************ 00:31:52.187 09:40:39 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:52.187 * Looking for test storage... 
00:31:52.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:52.187 09:40:39 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:52.187 09:40:39 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:52.187 09:40:39 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:52.187 09:40:39 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:52.187 09:40:39 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:52.187 09:40:39 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:52.187 09:40:39 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:52.187 09:40:39 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:52.187 09:40:39 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:52.187 09:40:39 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:52.187 09:40:39 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:52.187 09:40:39 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:52.187 09:40:39 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:52.187 09:40:39 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:52.187 09:40:39 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:52.187 09:40:39 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:52.187 09:40:39 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:52.187 09:40:39 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:52.187 09:40:39 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:52.187 09:40:39 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:52.187 09:40:39 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:52.187 09:40:39 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:52.187 09:40:39 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.187 09:40:39 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.188 09:40:39 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.188 09:40:39 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:31:52.188 09:40:39 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.188 09:40:39 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:31:52.188 09:40:39 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:52.188 09:40:39 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:52.188 09:40:39 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:52.188 09:40:39 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:52.188 09:40:39 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:52.188 09:40:39 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:52.188 09:40:39 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:52.188 09:40:39 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:52.188 09:40:39 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:52.188 09:40:39 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:52.188 09:40:39 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:52.188 09:40:39 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:52.188 09:40:39 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:52.188 09:40:39 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:52.188 09:40:39 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:52.188 09:40:39 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:52.188 09:40:39 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:52.188 09:40:39 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:52.188 09:40:39 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.188 09:40:39 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:52.188 09:40:39 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.188 09:40:39 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:52.188 09:40:39 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:52.188 09:40:39 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:31:52.188 09:40:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:00.342 09:40:47 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:00.343 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:00.343 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:00.343 Found net devices under 0000:31:00.0: cvl_0_0 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:00.343 Found net devices under 0000:31:00.1: cvl_0_1 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:00.343 09:40:47 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:00.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:00.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.554 ms 00:32:00.343 00:32:00.343 --- 10.0.0.2 ping statistics --- 00:32:00.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.343 rtt min/avg/max/mdev = 0.554/0.554/0.554/0.000 ms 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:00.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:00.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:32:00.343 00:32:00.343 --- 10.0.0.1 ping statistics --- 00:32:00.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.343 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:00.343 09:40:47 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:04.549 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:04.549 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:04.549 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:04.549 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:04.550 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:04.550 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:04.550 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:04.550 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:04.550 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:04.550 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:32:04.550 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:04.550 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:04.550 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:04.550 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:04.550 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:04.550 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:04.550 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:04.550 09:40:51 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:04.550 09:40:51 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:04.550 09:40:51 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:04.550 09:40:51 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:04.550 09:40:51 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:04.550 09:40:51 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:04.550 09:40:51 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:32:04.550 09:40:51 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:32:04.550 09:40:51 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:04.550 09:40:51 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:04.550 09:40:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:04.550 09:40:51 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=919186 00:32:04.550 09:40:51 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 919186 00:32:04.550 09:40:51 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:04.550 09:40:51 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 919186 ']' 00:32:04.550 09:40:51 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:04.550 09:40:51 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:04.550 09:40:51 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:04.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:04.550 09:40:51 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:04.550 09:40:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:04.550 [2024-07-15 09:40:51.260921] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:32:04.550 [2024-07-15 09:40:51.260978] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:04.550 EAL: No free 2048 kB hugepages reported on node 1 00:32:04.550 [2024-07-15 09:40:51.334964] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.550 [2024-07-15 09:40:51.400397] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:04.550 [2024-07-15 09:40:51.400434] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:04.550 [2024-07-15 09:40:51.400442] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:04.550 [2024-07-15 09:40:51.400448] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:04.550 [2024-07-15 09:40:51.400454] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
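nvmfappstart above launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the application answers on /var/tmp/spdk.sock. waitforlisten itself lives in autotest_common.sh and is not shown here; a minimal stand-in that captures the idea (a bounded poll of the RPC socket, with rpc_get_methods used purely as a cheap probe) might look like:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Illustrative replacement for waitforlisten: poll until the RPC server answers.
wait_for_spdk_sock() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1      # target process died early
        if "$rootdir/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null; then
            return 0                                # RPC server is up and listening
        fi
        sleep 0.5
    done
    return 1                                        # timed out
}

ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
nvmfpid=$!
wait_for_spdk_sock "$nvmfpid"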
00:32:04.550 [2024-07-15 09:40:51.400477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.121 09:40:52 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:05.121 09:40:52 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:32:05.121 09:40:52 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:05.121 09:40:52 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:05.121 09:40:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:05.121 09:40:52 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:05.121 09:40:52 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:32:05.121 09:40:52 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:32:05.121 09:40:52 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.121 09:40:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:05.121 [2024-07-15 09:40:52.062650] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:05.121 09:40:52 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.121 09:40:52 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:32:05.121 09:40:52 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:05.121 09:40:52 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:05.121 09:40:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:05.121 ************************************ 00:32:05.121 START TEST fio_dif_1_default 00:32:05.121 ************************************ 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:05.121 bdev_null0 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- 
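The dif test configures its target differently from the passthru test: the TCP transport is created with --dif-insert-or-strip so the target inserts and strips protection information on the wire, and the backing namespace is a null bdev carrying 16 bytes of metadata per 512-byte block with DIF type 1. The per-subsystem RPC sequence, condensed from the trace (the namespace and listener calls appear just below), with rpc standing in for rpc_cmd/scripts/rpc.py:

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

# Transport with DIF insert/strip enabled on the target side.
$rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip

# Null bdev: size 64, block size 512, 16-byte metadata, protection type 1
# (the NULL_SIZE/NULL_BLOCK_SIZE/NULL_META/NULL_DIF values set in dif.sh).
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1

# Export it as subsystem cnode0 over the namespaced TCP listener.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420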
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:05.121 [2024-07-15 09:40:52.146978] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:05.121 { 00:32:05.121 "params": { 00:32:05.121 "name": "Nvme$subsystem", 00:32:05.121 "trtype": "$TEST_TRANSPORT", 00:32:05.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:05.121 "adrfam": "ipv4", 00:32:05.121 "trsvcid": "$NVMF_PORT", 00:32:05.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:05.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:05.121 "hdgst": ${hdgst:-false}, 00:32:05.121 "ddgst": ${ddgst:-false} 00:32:05.121 }, 00:32:05.121 "method": "bdev_nvme_attach_controller" 00:32:05.121 } 00:32:05.121 EOF 00:32:05.121 )") 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:32:05.121 09:40:52 
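gen_nvmf_target_json above builds a small bdev_nvme JSON config (its resolved form is printed a little further down in the trace) that the fio spdk_bdev ioengine consumes via --spdk_json_conf, so fio's I/O is issued against the NVMe-oF namespace instead of a kernel block device. A hand-written equivalent of that wiring is sketched below. The outer "subsystems"/"bdev" wrapper is the standard SPDK JSON config layout and is assumed here (only the bdev_nvme_attach_controller entry is printed in the trace), and the [filename0] job section is modelled on the job line fio prints (randread, 4 KiB blocks, iodepth 4); gen_fio_conf's exact output is not shown in this excerpt.

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Bdev config handed to the fio plugin (attach-controller values as printed in the trace).
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF

# Hypothetical job file; the real one is generated on the fly by gen_fio_conf.
cat > /tmp/dif.fio <<'EOF'
[global]
thread=1
[filename0]
filename=Nvme0n1
rw=randread
bs=4096
iodepth=4
EOF

LD_PRELOAD="$rootdir/build/fio/spdk_bdev" \
    fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme0.json /tmp/dif.fio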
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:05.121 "params": { 00:32:05.121 "name": "Nvme0", 00:32:05.121 "trtype": "tcp", 00:32:05.121 "traddr": "10.0.0.2", 00:32:05.121 "adrfam": "ipv4", 00:32:05.121 "trsvcid": "4420", 00:32:05.121 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:05.121 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:05.121 "hdgst": false, 00:32:05.121 "ddgst": false 00:32:05.121 }, 00:32:05.121 "method": "bdev_nvme_attach_controller" 00:32:05.121 }' 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:05.121 09:40:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:05.393 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:05.393 fio-3.35 00:32:05.393 Starting 1 thread 00:32:05.653 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.881 00:32:17.881 filename0: (groupid=0, jobs=1): err= 0: pid=919798: Mon Jul 15 09:41:03 2024 00:32:17.881 read: IOPS=95, BW=384KiB/s (393kB/s)(3840KiB/10007msec) 00:32:17.881 slat (nsec): min=5402, max=35217, avg=6274.96, stdev=1895.13 00:32:17.881 clat (usec): min=40899, max=42952, avg=41677.42, stdev=463.34 00:32:17.881 lat (usec): min=40905, max=42960, avg=41683.70, stdev=463.53 00:32:17.881 clat percentiles (usec): 00:32:17.881 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:17.881 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:32:17.881 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:17.881 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:32:17.881 | 99.99th=[42730] 00:32:17.881 bw ( KiB/s): min= 352, max= 416, per=99.55%, avg=382.40, stdev=12.61, samples=20 00:32:17.881 iops : min= 88, max= 104, avg=95.60, stdev= 3.15, samples=20 
00:32:17.881 lat (msec) : 50=100.00% 00:32:17.881 cpu : usr=95.38%, sys=4.42%, ctx=17, majf=0, minf=248 00:32:17.881 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:17.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.881 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.881 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.881 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:17.881 00:32:17.881 Run status group 0 (all jobs): 00:32:17.881 READ: bw=384KiB/s (393kB/s), 384KiB/s-384KiB/s (393kB/s-393kB/s), io=3840KiB (3932kB), run=10007-10007msec 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.881 00:32:17.881 real 0m11.263s 00:32:17.881 user 0m21.774s 00:32:17.881 sys 0m0.787s 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:17.881 ************************************ 00:32:17.881 END TEST fio_dif_1_default 00:32:17.881 ************************************ 00:32:17.881 09:41:03 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:32:17.881 09:41:03 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:32:17.881 09:41:03 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:17.881 09:41:03 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:17.881 09:41:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:17.881 ************************************ 00:32:17.881 START TEST fio_dif_1_multi_subsystems 00:32:17.881 ************************************ 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:17.881 09:41:03 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:17.881 bdev_null0 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:17.881 [2024-07-15 09:41:03.486657] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:17.881 bdev_null1 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.881 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:17.882 { 00:32:17.882 "params": { 00:32:17.882 "name": "Nvme$subsystem", 00:32:17.882 "trtype": "$TEST_TRANSPORT", 00:32:17.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:17.882 "adrfam": "ipv4", 00:32:17.882 "trsvcid": "$NVMF_PORT", 00:32:17.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:17.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:17.882 "hdgst": ${hdgst:-false}, 00:32:17.882 "ddgst": ${ddgst:-false} 00:32:17.882 }, 00:32:17.882 "method": "bdev_nvme_attach_controller" 00:32:17.882 } 00:32:17.882 EOF 00:32:17.882 )") 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:17.882 { 00:32:17.882 "params": { 00:32:17.882 "name": "Nvme$subsystem", 00:32:17.882 "trtype": "$TEST_TRANSPORT", 00:32:17.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:17.882 "adrfam": "ipv4", 00:32:17.882 "trsvcid": "$NVMF_PORT", 00:32:17.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:17.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:17.882 "hdgst": ${hdgst:-false}, 00:32:17.882 "ddgst": ${ddgst:-false} 00:32:17.882 }, 00:32:17.882 "method": "bdev_nvme_attach_controller" 00:32:17.882 } 00:32:17.882 EOF 00:32:17.882 )") 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:17.882 "params": { 00:32:17.882 "name": "Nvme0", 00:32:17.882 "trtype": "tcp", 00:32:17.882 "traddr": "10.0.0.2", 00:32:17.882 "adrfam": "ipv4", 00:32:17.882 "trsvcid": "4420", 00:32:17.882 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:17.882 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:17.882 "hdgst": false, 00:32:17.882 "ddgst": false 00:32:17.882 }, 00:32:17.882 "method": "bdev_nvme_attach_controller" 00:32:17.882 },{ 00:32:17.882 "params": { 00:32:17.882 "name": "Nvme1", 00:32:17.882 "trtype": "tcp", 00:32:17.882 "traddr": "10.0.0.2", 00:32:17.882 "adrfam": "ipv4", 00:32:17.882 "trsvcid": "4420", 00:32:17.882 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:17.882 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:17.882 "hdgst": false, 00:32:17.882 "ddgst": false 00:32:17.882 }, 00:32:17.882 "method": "bdev_nvme_attach_controller" 00:32:17.882 }' 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:17.882 09:41:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:17.882 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:17.882 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:17.882 fio-3.35 00:32:17.882 Starting 2 threads 00:32:17.882 EAL: No free 2048 kB hugepages reported on node 1 00:32:27.883 00:32:27.883 filename0: (groupid=0, jobs=1): err= 0: pid=922068: Mon Jul 15 09:41:14 2024 00:32:27.883 read: IOPS=185, BW=743KiB/s (760kB/s)(7456KiB/10040msec) 00:32:27.883 slat (nsec): min=5409, max=40768, avg=6217.88, stdev=1408.11 00:32:27.883 clat (usec): min=719, max=43396, avg=21526.77, stdev=20465.97 00:32:27.883 lat (usec): min=727, max=43436, avg=21532.99, stdev=20465.96 00:32:27.883 clat percentiles (usec): 00:32:27.883 | 1.00th=[ 873], 5.00th=[ 930], 10.00th=[ 947], 20.00th=[ 971], 00:32:27.883 | 30.00th=[ 988], 40.00th=[ 1012], 50.00th=[41157], 60.00th=[41681], 00:32:27.883 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:27.883 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:32:27.883 | 99.99th=[43254] 
00:32:27.883 bw ( KiB/s): min= 672, max= 768, per=65.94%, avg=744.00, stdev=34.24, samples=20 00:32:27.883 iops : min= 168, max= 192, avg=186.00, stdev= 8.56, samples=20 00:32:27.883 lat (usec) : 750=0.27%, 1000=36.05% 00:32:27.883 lat (msec) : 2=13.47%, 50=50.21% 00:32:27.883 cpu : usr=96.50%, sys=3.29%, ctx=15, majf=0, minf=105 00:32:27.883 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:27.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.883 issued rwts: total=1864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.883 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:27.883 filename1: (groupid=0, jobs=1): err= 0: pid=922069: Mon Jul 15 09:41:14 2024 00:32:27.883 read: IOPS=96, BW=386KiB/s (396kB/s)(3872KiB/10022msec) 00:32:27.883 slat (nsec): min=5399, max=33649, avg=6306.84, stdev=1559.03 00:32:27.883 clat (usec): min=40855, max=43250, avg=41393.45, stdev=501.75 00:32:27.883 lat (usec): min=40863, max=43284, avg=41399.75, stdev=501.47 00:32:27.884 clat percentiles (usec): 00:32:27.884 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:27.884 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:32:27.884 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:27.884 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:32:27.884 | 99.99th=[43254] 00:32:27.884 bw ( KiB/s): min= 384, max= 416, per=34.12%, avg=385.60, stdev= 7.16, samples=20 00:32:27.884 iops : min= 96, max= 104, avg=96.40, stdev= 1.79, samples=20 00:32:27.884 lat (msec) : 50=100.00% 00:32:27.884 cpu : usr=96.79%, sys=2.99%, ctx=9, majf=0, minf=135 00:32:27.884 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:27.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.884 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.884 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:27.884 00:32:27.884 Run status group 0 (all jobs): 00:32:27.884 READ: bw=1128KiB/s (1155kB/s), 386KiB/s-743KiB/s (396kB/s-760kB/s), io=11.1MiB (11.6MB), run=10022-10040msec 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.884 00:32:27.884 real 0m11.492s 00:32:27.884 user 0m35.416s 00:32:27.884 sys 0m0.930s 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:27.884 09:41:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:27.884 ************************************ 00:32:27.884 END TEST fio_dif_1_multi_subsystems 00:32:27.884 ************************************ 00:32:27.884 09:41:14 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:32:27.884 09:41:14 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:32:27.884 09:41:14 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:27.884 09:41:14 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:27.884 09:41:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:27.884 ************************************ 00:32:27.884 START TEST fio_dif_rand_params 00:32:27.884 ************************************ 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@31 -- # create_subsystem 0 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:27.884 bdev_null0 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:27.884 [2024-07-15 09:41:15.056957] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:27.884 { 00:32:27.884 "params": { 00:32:27.884 "name": "Nvme$subsystem", 00:32:27.884 "trtype": "$TEST_TRANSPORT", 00:32:27.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:27.884 "adrfam": "ipv4", 00:32:27.884 "trsvcid": "$NVMF_PORT", 
00:32:27.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:27.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:27.884 "hdgst": ${hdgst:-false}, 00:32:27.884 "ddgst": ${ddgst:-false} 00:32:27.884 }, 00:32:27.884 "method": "bdev_nvme_attach_controller" 00:32:27.884 } 00:32:27.884 EOF 00:32:27.884 )") 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:27.884 09:41:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:27.884 "params": { 00:32:27.884 "name": "Nvme0", 00:32:27.884 "trtype": "tcp", 00:32:27.884 "traddr": "10.0.0.2", 00:32:27.884 "adrfam": "ipv4", 00:32:27.884 "trsvcid": "4420", 00:32:27.884 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:27.884 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:27.884 "hdgst": false, 00:32:27.884 "ddgst": false 00:32:27.884 }, 00:32:27.884 "method": "bdev_nvme_attach_controller" 00:32:27.884 }' 00:32:28.179 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:28.179 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:28.179 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:28.179 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:28.179 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:28.179 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:28.179 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:28.179 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:28.179 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:28.179 09:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:28.446 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:28.446 ... 
00:32:28.446 fio-3.35 00:32:28.446 Starting 3 threads 00:32:28.446 EAL: No free 2048 kB hugepages reported on node 1 00:32:35.026 00:32:35.026 filename0: (groupid=0, jobs=1): err= 0: pid=924437: Mon Jul 15 09:41:21 2024 00:32:35.026 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(130MiB/5043msec) 00:32:35.026 slat (nsec): min=5429, max=36203, avg=6355.01, stdev=1522.37 00:32:35.026 clat (usec): min=5634, max=90335, avg=14553.70, stdev=10934.61 00:32:35.026 lat (usec): min=5640, max=90341, avg=14560.06, stdev=10934.58 00:32:35.026 clat percentiles (usec): 00:32:35.026 | 1.00th=[ 5800], 5.00th=[ 6521], 10.00th=[ 7439], 20.00th=[ 8586], 00:32:35.026 | 30.00th=[10159], 40.00th=[11207], 50.00th=[11863], 60.00th=[12518], 00:32:35.026 | 70.00th=[13829], 80.00th=[15139], 90.00th=[16909], 95.00th=[48497], 00:32:35.026 | 99.00th=[52167], 99.50th=[53216], 99.90th=[55837], 99.95th=[90702], 00:32:35.026 | 99.99th=[90702] 00:32:35.026 bw ( KiB/s): min=17664, max=32512, per=31.61%, avg=26470.40, stdev=4173.84, samples=10 00:32:35.026 iops : min= 138, max= 254, avg=206.80, stdev=32.61, samples=10 00:32:35.026 lat (msec) : 10=28.47%, 20=63.61%, 50=5.21%, 100=2.70% 00:32:35.026 cpu : usr=95.91%, sys=3.87%, ctx=13, majf=0, minf=77 00:32:35.026 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:35.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:35.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:35.026 issued rwts: total=1036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:35.026 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:35.026 filename0: (groupid=0, jobs=1): err= 0: pid=924438: Mon Jul 15 09:41:21 2024 00:32:35.026 read: IOPS=182, BW=22.8MiB/s (24.0MB/s)(115MiB/5045msec) 00:32:35.026 slat (nsec): min=5660, max=54558, avg=8841.29, stdev=3459.99 00:32:35.026 clat (usec): min=5491, max=93042, avg=16357.40, stdev=13408.22 00:32:35.026 lat (usec): min=5500, max=93051, avg=16366.24, stdev=13408.47 00:32:35.026 clat percentiles (usec): 00:32:35.026 | 1.00th=[ 6325], 5.00th=[ 6783], 10.00th=[ 7635], 20.00th=[ 8979], 00:32:35.026 | 30.00th=[10552], 40.00th=[11731], 50.00th=[12518], 60.00th=[13698], 00:32:35.026 | 70.00th=[15270], 80.00th=[16581], 90.00th=[46924], 95.00th=[51119], 00:32:35.026 | 99.00th=[57410], 99.50th=[89654], 99.90th=[92799], 99.95th=[92799], 00:32:35.026 | 99.99th=[92799] 00:32:35.026 bw ( KiB/s): min=18432, max=31744, per=28.10%, avg=23530.60, stdev=4532.86, samples=10 00:32:35.026 iops : min= 144, max= 248, avg=183.80, stdev=35.43, samples=10 00:32:35.026 lat (msec) : 10=26.03%, 20=63.77%, 50=3.90%, 100=6.29% 00:32:35.026 cpu : usr=86.60%, sys=8.19%, ctx=394, majf=0, minf=117 00:32:35.026 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:35.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:35.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:35.026 issued rwts: total=922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:35.026 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:35.026 filename0: (groupid=0, jobs=1): err= 0: pid=924439: Mon Jul 15 09:41:21 2024 00:32:35.026 read: IOPS=268, BW=33.5MiB/s (35.1MB/s)(168MiB/5005msec) 00:32:35.026 slat (nsec): min=7912, max=32556, avg=8634.04, stdev=1086.37 00:32:35.026 clat (usec): min=4016, max=93191, avg=11175.09, stdev=11290.07 00:32:35.026 lat (usec): min=4024, max=93200, avg=11183.73, stdev=11290.06 00:32:35.026 clat percentiles (usec): 
00:32:35.026 | 1.00th=[ 4490], 5.00th=[ 4948], 10.00th=[ 5735], 20.00th=[ 6849], 00:32:35.026 | 30.00th=[ 7439], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 8979], 00:32:35.026 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[47973], 00:32:35.026 | 99.00th=[51643], 99.50th=[52167], 99.90th=[90702], 99.95th=[92799], 00:32:35.026 | 99.99th=[92799] 00:32:35.026 bw ( KiB/s): min=23296, max=43264, per=40.97%, avg=34304.00, stdev=6767.75, samples=10 00:32:35.026 iops : min= 182, max= 338, avg=268.00, stdev=52.87, samples=10 00:32:35.026 lat (msec) : 10=81.59%, 20=11.33%, 50=4.84%, 100=2.24% 00:32:35.026 cpu : usr=96.40%, sys=3.36%, ctx=13, majf=0, minf=100 00:32:35.026 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:35.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:35.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:35.026 issued rwts: total=1342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:35.026 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:35.026 00:32:35.026 Run status group 0 (all jobs): 00:32:35.026 READ: bw=81.8MiB/s (85.7MB/s), 22.8MiB/s-33.5MiB/s (24.0MB/s-35.1MB/s), io=413MiB (433MB), run=5005-5045msec 00:32:35.026 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:35.026 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:35.026 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:35.026 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:35.026 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:35.026 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:35.026 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.026 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:35.026 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.026 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:35.026 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.026 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:35.026 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.026 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:32:35.026 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:32:35.026 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:32:35.026 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:32:35.026 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:35.027 bdev_null0 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:35.027 [2024-07-15 09:41:21.348274] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:35.027 bdev_null1 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:35.027 bdev_null2 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:35.027 { 00:32:35.027 "params": { 00:32:35.027 "name": "Nvme$subsystem", 00:32:35.027 "trtype": "$TEST_TRANSPORT", 00:32:35.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:35.027 "adrfam": "ipv4", 00:32:35.027 "trsvcid": "$NVMF_PORT", 00:32:35.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:35.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:35.027 "hdgst": ${hdgst:-false}, 00:32:35.027 "ddgst": ${ddgst:-false} 00:32:35.027 }, 00:32:35.027 "method": "bdev_nvme_attach_controller" 00:32:35.027 } 00:32:35.027 EOF 00:32:35.027 )") 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:35.027 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:35.028 { 00:32:35.028 "params": { 00:32:35.028 "name": "Nvme$subsystem", 00:32:35.028 "trtype": "$TEST_TRANSPORT", 00:32:35.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:35.028 "adrfam": "ipv4", 00:32:35.028 "trsvcid": "$NVMF_PORT", 00:32:35.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:35.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:35.028 "hdgst": ${hdgst:-false}, 00:32:35.028 "ddgst": ${ddgst:-false} 00:32:35.028 }, 00:32:35.028 "method": "bdev_nvme_attach_controller" 00:32:35.028 } 00:32:35.028 EOF 00:32:35.028 )") 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:35.028 { 00:32:35.028 "params": { 00:32:35.028 "name": "Nvme$subsystem", 00:32:35.028 "trtype": "$TEST_TRANSPORT", 00:32:35.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:35.028 "adrfam": "ipv4", 00:32:35.028 "trsvcid": "$NVMF_PORT", 00:32:35.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:35.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:35.028 "hdgst": ${hdgst:-false}, 00:32:35.028 "ddgst": ${ddgst:-false} 00:32:35.028 }, 00:32:35.028 "method": "bdev_nvme_attach_controller" 00:32:35.028 } 00:32:35.028 EOF 00:32:35.028 )") 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:35.028 "params": { 00:32:35.028 "name": "Nvme0", 00:32:35.028 "trtype": "tcp", 00:32:35.028 "traddr": "10.0.0.2", 00:32:35.028 "adrfam": "ipv4", 00:32:35.028 "trsvcid": "4420", 00:32:35.028 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:35.028 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:35.028 "hdgst": false, 00:32:35.028 "ddgst": false 00:32:35.028 }, 00:32:35.028 "method": "bdev_nvme_attach_controller" 00:32:35.028 },{ 00:32:35.028 "params": { 00:32:35.028 "name": "Nvme1", 00:32:35.028 "trtype": "tcp", 00:32:35.028 "traddr": "10.0.0.2", 00:32:35.028 "adrfam": "ipv4", 00:32:35.028 "trsvcid": "4420", 00:32:35.028 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:35.028 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:35.028 "hdgst": false, 00:32:35.028 "ddgst": false 00:32:35.028 }, 00:32:35.028 "method": "bdev_nvme_attach_controller" 00:32:35.028 },{ 00:32:35.028 "params": { 00:32:35.028 "name": "Nvme2", 00:32:35.028 "trtype": "tcp", 00:32:35.028 "traddr": "10.0.0.2", 00:32:35.028 "adrfam": "ipv4", 00:32:35.028 "trsvcid": "4420", 00:32:35.028 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:35.028 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:35.028 "hdgst": false, 00:32:35.028 "ddgst": false 00:32:35.028 }, 00:32:35.028 "method": "bdev_nvme_attach_controller" 00:32:35.028 }' 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1345 -- # asan_lib= 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:35.028 09:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:35.028 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:35.028 ... 00:32:35.028 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:35.028 ... 00:32:35.028 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:35.028 ... 00:32:35.028 fio-3.35 00:32:35.028 Starting 24 threads 00:32:35.028 EAL: No free 2048 kB hugepages reported on node 1 00:32:47.310 00:32:47.310 filename0: (groupid=0, jobs=1): err= 0: pid=925815: Mon Jul 15 09:41:32 2024 00:32:47.310 read: IOPS=505, BW=2022KiB/s (2071kB/s)(19.8MiB/10012msec) 00:32:47.310 slat (nsec): min=5589, max=45977, avg=8673.99, stdev=4680.96 00:32:47.310 clat (usec): min=3599, max=35186, avg=31568.64, stdev=4176.06 00:32:47.310 lat (usec): min=3624, max=35193, avg=31577.32, stdev=4175.21 00:32:47.311 clat percentiles (usec): 00:32:47.311 | 1.00th=[ 4883], 5.00th=[30540], 10.00th=[31851], 20.00th=[31851], 00:32:47.311 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:47.311 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:32:47.311 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:32:47.311 | 99.99th=[35390] 00:32:47.311 bw ( KiB/s): min= 1920, max= 2736, per=4.25%, avg=2018.40, stdev=180.59, samples=20 00:32:47.311 iops : min= 480, max= 684, avg=504.60, stdev=45.15, samples=20 00:32:47.311 lat (msec) : 4=0.47%, 10=1.30%, 20=0.87%, 50=97.35% 00:32:47.311 cpu : usr=99.00%, sys=0.69%, ctx=43, majf=0, minf=53 00:32:47.311 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.8%, 16=6.5%, 32=0.0%, >=64=0.0% 00:32:47.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.311 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.311 issued rwts: total=5062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.311 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:47.311 filename0: (groupid=0, jobs=1): err= 0: pid=925816: Mon Jul 15 09:41:32 2024 00:32:47.311 read: IOPS=494, BW=1977KiB/s (2024kB/s)(19.3MiB/10004msec) 00:32:47.311 slat (nsec): min=5637, max=94605, avg=19006.34, stdev=15021.83 00:32:47.311 clat (usec): min=17903, max=35893, avg=32226.40, stdev=1381.99 00:32:47.311 lat (usec): min=17912, max=35913, avg=32245.41, stdev=1381.43 00:32:47.311 clat percentiles (usec): 00:32:47.311 | 1.00th=[29230], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:32:47.311 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:47.311 | 70.00th=[32637], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:32:47.311 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:32:47.311 | 99.99th=[35914] 00:32:47.311 bw ( KiB/s): min= 1920, max= 2048, per=4.15%, avg=1973.89, stdev=64.93, samples=19 00:32:47.311 iops : min= 480, max= 512, avg=493.47, stdev=16.23, samples=19 00:32:47.311 lat (msec) : 20=0.32%, 
50=99.68% 00:32:47.311 cpu : usr=99.32%, sys=0.41%, ctx=17, majf=0, minf=39 00:32:47.311 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:47.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.311 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.311 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.311 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:47.311 filename0: (groupid=0, jobs=1): err= 0: pid=925817: Mon Jul 15 09:41:32 2024 00:32:47.311 read: IOPS=506, BW=2025KiB/s (2073kB/s)(19.8MiB/10020msec) 00:32:47.311 slat (nsec): min=2804, max=57862, avg=7624.94, stdev=3438.32 00:32:47.311 clat (usec): min=2437, max=35218, avg=31543.36, stdev=4236.50 00:32:47.311 lat (usec): min=2442, max=35227, avg=31550.98, stdev=4236.68 00:32:47.311 clat percentiles (usec): 00:32:47.311 | 1.00th=[ 4555], 5.00th=[27395], 10.00th=[31851], 20.00th=[32113], 00:32:47.311 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:32:47.311 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:32:47.311 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:32:47.311 | 99.99th=[35390] 00:32:47.311 bw ( KiB/s): min= 1920, max= 2816, per=4.26%, avg=2022.40, stdev=197.43, samples=20 00:32:47.311 iops : min= 480, max= 704, avg=505.60, stdev=49.36, samples=20 00:32:47.311 lat (msec) : 4=0.49%, 10=1.20%, 20=1.14%, 50=97.16% 00:32:47.311 cpu : usr=98.91%, sys=0.69%, ctx=65, majf=0, minf=146 00:32:47.311 IO depths : 1=6.1%, 2=12.2%, 4=24.5%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:32:47.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.311 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.311 issued rwts: total=5072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.311 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:47.311 filename0: (groupid=0, jobs=1): err= 0: pid=925818: Mon Jul 15 09:41:32 2024 00:32:47.311 read: IOPS=492, BW=1971KiB/s (2018kB/s)(19.2MiB/10003msec) 00:32:47.311 slat (usec): min=5, max=112, avg=18.20, stdev=16.00 00:32:47.311 clat (usec): min=20288, max=53416, avg=32332.24, stdev=1700.61 00:32:47.311 lat (usec): min=20295, max=53437, avg=32350.44, stdev=1699.74 00:32:47.311 clat percentiles (usec): 00:32:47.311 | 1.00th=[28967], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:32:47.311 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:47.311 | 70.00th=[32637], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:32:47.311 | 99.00th=[34341], 99.50th=[37487], 99.90th=[53216], 99.95th=[53216], 00:32:47.311 | 99.99th=[53216] 00:32:47.311 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1967.16, stdev=76.45, samples=19 00:32:47.311 iops : min= 448, max= 512, avg=491.79, stdev=19.11, samples=19 00:32:47.311 lat (msec) : 50=99.68%, 100=0.32% 00:32:47.311 cpu : usr=99.27%, sys=0.47%, ctx=14, majf=0, minf=42 00:32:47.311 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:47.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.311 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.311 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.311 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:47.311 filename0: (groupid=0, jobs=1): err= 0: pid=925819: Mon Jul 15 09:41:32 2024 00:32:47.311 read: 
IOPS=494, BW=1976KiB/s (2023kB/s)(19.3MiB/10008msec) 00:32:47.311 slat (nsec): min=5418, max=57406, avg=12420.18, stdev=8104.74 00:32:47.311 clat (usec): min=11596, max=55262, avg=32276.21, stdev=2091.44 00:32:47.311 lat (usec): min=11602, max=55278, avg=32288.63, stdev=2091.64 00:32:47.311 clat percentiles (usec): 00:32:47.311 | 1.00th=[28967], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:32:47.311 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:47.311 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:32:47.311 | 99.00th=[34341], 99.50th=[35390], 99.90th=[55313], 99.95th=[55313], 00:32:47.311 | 99.99th=[55313] 00:32:47.311 bw ( KiB/s): min= 1795, max= 2048, per=4.14%, avg=1967.32, stdev=76.07, samples=19 00:32:47.311 iops : min= 448, max= 512, avg=491.79, stdev=19.11, samples=19 00:32:47.311 lat (msec) : 20=0.65%, 50=99.03%, 100=0.32% 00:32:47.311 cpu : usr=99.02%, sys=0.65%, ctx=43, majf=0, minf=84 00:32:47.311 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:47.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.311 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.311 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.311 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:47.311 filename0: (groupid=0, jobs=1): err= 0: pid=925820: Mon Jul 15 09:41:32 2024 00:32:47.311 read: IOPS=494, BW=1976KiB/s (2024kB/s)(19.3MiB/10007msec) 00:32:47.311 slat (nsec): min=5473, max=59725, avg=15577.57, stdev=9753.99 00:32:47.311 clat (usec): min=7180, max=60985, avg=32243.12, stdev=2639.91 00:32:47.311 lat (usec): min=7186, max=61003, avg=32258.70, stdev=2640.41 00:32:47.311 clat percentiles (usec): 00:32:47.311 | 1.00th=[28967], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:32:47.311 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:47.311 | 70.00th=[32637], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:32:47.311 | 99.00th=[34341], 99.50th=[34866], 99.90th=[61080], 99.95th=[61080], 00:32:47.311 | 99.99th=[61080] 00:32:47.311 bw ( KiB/s): min= 1795, max= 2048, per=4.13%, avg=1960.58, stdev=74.17, samples=19 00:32:47.311 iops : min= 448, max= 512, avg=490.11, stdev=18.64, samples=19 00:32:47.311 lat (msec) : 10=0.32%, 20=0.65%, 50=98.71%, 100=0.32% 00:32:47.311 cpu : usr=99.22%, sys=0.52%, ctx=17, majf=0, minf=63 00:32:47.311 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:47.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.311 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.311 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.311 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:47.311 filename0: (groupid=0, jobs=1): err= 0: pid=925821: Mon Jul 15 09:41:32 2024 00:32:47.311 read: IOPS=494, BW=1977KiB/s (2024kB/s)(19.3MiB/10004msec) 00:32:47.311 slat (usec): min=5, max=105, avg=24.28, stdev=17.61 00:32:47.311 clat (usec): min=16354, max=45657, avg=32157.97, stdev=1567.21 00:32:47.311 lat (usec): min=16360, max=45671, avg=32182.25, stdev=1567.13 00:32:47.311 clat percentiles (usec): 00:32:47.311 | 1.00th=[22676], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:32:47.311 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:47.311 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:32:47.311 | 
99.00th=[34341], 99.50th=[35914], 99.90th=[40633], 99.95th=[41157], 00:32:47.311 | 99.99th=[45876] 00:32:47.311 bw ( KiB/s): min= 1920, max= 2048, per=4.15%, avg=1973.89, stdev=64.93, samples=19 00:32:47.311 iops : min= 480, max= 512, avg=493.47, stdev=16.23, samples=19 00:32:47.311 lat (msec) : 20=0.65%, 50=99.35% 00:32:47.311 cpu : usr=98.76%, sys=0.90%, ctx=72, majf=0, minf=50 00:32:47.311 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:47.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.311 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.311 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.311 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:47.311 filename0: (groupid=0, jobs=1): err= 0: pid=925822: Mon Jul 15 09:41:32 2024 00:32:47.311 read: IOPS=491, BW=1968KiB/s (2015kB/s)(19.3MiB/10054msec) 00:32:47.311 slat (usec): min=5, max=101, avg=21.70, stdev=16.13 00:32:47.311 clat (usec): min=16932, max=74796, avg=32280.57, stdev=2900.24 00:32:47.311 lat (usec): min=16940, max=74802, avg=32302.26, stdev=2900.06 00:32:47.311 clat percentiles (usec): 00:32:47.311 | 1.00th=[22676], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:32:47.311 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:47.311 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:32:47.311 | 99.00th=[41157], 99.50th=[53216], 99.90th=[74974], 99.95th=[74974], 00:32:47.311 | 99.99th=[74974] 00:32:47.311 bw ( KiB/s): min= 1792, max= 2144, per=4.15%, avg=1973.60, stdev=82.78, samples=20 00:32:47.311 iops : min= 448, max= 536, avg=493.40, stdev=20.69, samples=20 00:32:47.311 lat (msec) : 20=0.20%, 50=99.23%, 100=0.57% 00:32:47.311 cpu : usr=99.23%, sys=0.49%, ctx=10, majf=0, minf=40 00:32:47.311 IO depths : 1=5.9%, 2=11.8%, 4=24.0%, 8=51.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:32:47.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.311 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.311 issued rwts: total=4946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.311 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:47.311 filename1: (groupid=0, jobs=1): err= 0: pid=925823: Mon Jul 15 09:41:32 2024 00:32:47.311 read: IOPS=494, BW=1976KiB/s (2023kB/s)(19.3MiB/10008msec) 00:32:47.311 slat (usec): min=5, max=117, avg=22.34, stdev=17.40 00:32:47.311 clat (usec): min=9354, max=53037, avg=32171.58, stdev=2191.16 00:32:47.311 lat (usec): min=9360, max=53054, avg=32193.92, stdev=2191.28 00:32:47.311 clat percentiles (usec): 00:32:47.311 | 1.00th=[30278], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:32:47.311 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:47.311 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:32:47.311 | 99.00th=[34341], 99.50th=[34866], 99.90th=[53216], 99.95th=[53216], 00:32:47.311 | 99.99th=[53216] 00:32:47.311 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1960.42, stdev=74.55, samples=19 00:32:47.311 iops : min= 448, max= 512, avg=490.11, stdev=18.64, samples=19 00:32:47.311 lat (msec) : 10=0.32%, 20=0.32%, 50=99.03%, 100=0.32% 00:32:47.311 cpu : usr=98.33%, sys=0.97%, ctx=413, majf=0, minf=61 00:32:47.311 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:47.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.311 complete : 0=0.0%, 
4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.311 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.311 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:47.311 filename1: (groupid=0, jobs=1): err= 0: pid=925824: Mon Jul 15 09:41:32 2024 00:32:47.311 read: IOPS=494, BW=1979KiB/s (2027kB/s)(19.3MiB/10007msec) 00:32:47.311 slat (nsec): min=5453, max=91982, avg=14752.87, stdev=13787.03 00:32:47.311 clat (usec): min=9364, max=61071, avg=32260.42, stdev=4141.99 00:32:47.311 lat (usec): min=9370, max=61091, avg=32275.17, stdev=4142.29 00:32:47.311 clat percentiles (usec): 00:32:47.311 | 1.00th=[20055], 5.00th=[25822], 10.00th=[29492], 20.00th=[31851], 00:32:47.311 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:47.311 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[36963], 00:32:47.311 | 99.00th=[51119], 99.50th=[54264], 99.90th=[61080], 99.95th=[61080], 00:32:47.311 | 99.99th=[61080] 00:32:47.311 bw ( KiB/s): min= 1763, max= 2048, per=4.15%, avg=1970.68, stdev=62.84, samples=19 00:32:47.311 iops : min= 440, max= 512, avg=492.63, stdev=15.85, samples=19 00:32:47.312 lat (msec) : 10=0.04%, 20=0.83%, 50=97.76%, 100=1.37% 00:32:47.312 cpu : usr=99.03%, sys=0.71%, ctx=10, majf=0, minf=67 00:32:47.312 IO depths : 1=0.7%, 2=2.0%, 4=5.7%, 8=75.7%, 16=15.9%, 32=0.0%, >=64=0.0% 00:32:47.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.312 complete : 0=0.0%, 4=90.2%, 8=8.1%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.312 issued rwts: total=4952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.312 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:47.312 filename1: (groupid=0, jobs=1): err= 0: pid=925825: Mon Jul 15 09:41:32 2024 00:32:47.312 read: IOPS=493, BW=1974KiB/s (2021kB/s)(19.3MiB/10019msec) 00:32:47.312 slat (nsec): min=5591, max=95675, avg=17910.31, stdev=16596.27 00:32:47.312 clat (usec): min=17899, max=40651, avg=32271.44, stdev=1247.96 00:32:47.312 lat (usec): min=17912, max=40658, avg=32289.36, stdev=1246.28 00:32:47.312 clat percentiles (usec): 00:32:47.312 | 1.00th=[30540], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:32:47.312 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:47.312 | 70.00th=[32637], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:32:47.312 | 99.00th=[34866], 99.50th=[35914], 99.90th=[40633], 99.95th=[40633], 00:32:47.312 | 99.99th=[40633] 00:32:47.312 bw ( KiB/s): min= 1920, max= 2048, per=4.15%, avg=1971.20, stdev=64.34, samples=20 00:32:47.312 iops : min= 480, max= 512, avg=492.80, stdev=16.08, samples=20 00:32:47.312 lat (msec) : 20=0.32%, 50=99.68% 00:32:47.312 cpu : usr=99.02%, sys=0.64%, ctx=62, majf=0, minf=38 00:32:47.312 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:47.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.312 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.312 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.312 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:47.312 filename1: (groupid=0, jobs=1): err= 0: pid=925827: Mon Jul 15 09:41:32 2024 00:32:47.312 read: IOPS=495, BW=1981KiB/s (2029kB/s)(19.4MiB/10013msec) 00:32:47.312 slat (usec): min=5, max=101, avg=15.08, stdev=11.63 00:32:47.312 clat (usec): min=16938, max=35972, avg=32178.76, stdev=1553.46 00:32:47.312 lat (usec): min=16947, max=35989, avg=32193.84, stdev=1553.62 
00:32:47.312 clat percentiles (usec): 00:32:47.312 | 1.00th=[24511], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:32:47.312 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:47.312 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:32:47.312 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:32:47.312 | 99.99th=[35914] 00:32:47.312 bw ( KiB/s): min= 1920, max= 2048, per=4.16%, avg=1977.60, stdev=65.33, samples=20 00:32:47.312 iops : min= 480, max= 512, avg=494.40, stdev=16.33, samples=20 00:32:47.312 lat (msec) : 20=0.65%, 50=99.35% 00:32:47.312 cpu : usr=99.18%, sys=0.49%, ctx=71, majf=0, minf=65 00:32:47.312 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:47.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.312 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.312 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.312 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:47.312 filename1: (groupid=0, jobs=1): err= 0: pid=925828: Mon Jul 15 09:41:32 2024 00:32:47.312 read: IOPS=493, BW=1975KiB/s (2023kB/s)(19.3MiB/10007msec) 00:32:47.312 slat (nsec): min=5614, max=99010, avg=21194.73, stdev=14021.21 00:32:47.312 clat (usec): min=9262, max=52304, avg=32200.10, stdev=2201.81 00:32:47.312 lat (usec): min=9269, max=52321, avg=32221.29, stdev=2201.96 00:32:47.312 clat percentiles (usec): 00:32:47.312 | 1.00th=[25297], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:32:47.312 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:47.312 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:32:47.312 | 99.00th=[34866], 99.50th=[38536], 99.90th=[52167], 99.95th=[52167], 00:32:47.312 | 99.99th=[52167] 00:32:47.312 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1960.42, stdev=74.55, samples=19 00:32:47.312 iops : min= 448, max= 512, avg=490.11, stdev=18.64, samples=19 00:32:47.312 lat (msec) : 10=0.28%, 20=0.36%, 50=99.03%, 100=0.32% 00:32:47.312 cpu : usr=99.31%, sys=0.43%, ctx=8, majf=0, minf=50 00:32:47.312 IO depths : 1=5.6%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:32:47.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.312 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.312 issued rwts: total=4942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.312 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:47.312 filename1: (groupid=0, jobs=1): err= 0: pid=925829: Mon Jul 15 09:41:32 2024 00:32:47.312 read: IOPS=517, BW=2071KiB/s (2121kB/s)(20.3MiB/10019msec) 00:32:47.312 slat (nsec): min=5569, max=53519, avg=9963.03, stdev=6585.09 00:32:47.312 clat (usec): min=13492, max=59739, avg=30835.23, stdev=5579.15 00:32:47.312 lat (usec): min=13499, max=59746, avg=30845.20, stdev=5580.63 00:32:47.312 clat percentiles (usec): 00:32:47.312 | 1.00th=[18220], 5.00th=[21365], 10.00th=[23200], 20.00th=[26084], 00:32:47.312 | 30.00th=[31327], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:32:47.312 | 70.00th=[32375], 80.00th=[32637], 90.00th=[34341], 95.00th=[39060], 00:32:47.312 | 99.00th=[50594], 99.50th=[52691], 99.90th=[59507], 99.95th=[59507], 00:32:47.312 | 99.99th=[59507] 00:32:47.312 bw ( KiB/s): min= 1840, max= 2416, per=4.35%, avg=2068.80, stdev=129.79, samples=20 00:32:47.312 iops : min= 460, max= 604, avg=517.20, stdev=32.45, samples=20 00:32:47.312 
lat (msec) : 20=2.56%, 50=96.32%, 100=1.12% 00:32:47.312 cpu : usr=98.89%, sys=0.83%, ctx=15, majf=0, minf=96 00:32:47.312 IO depths : 1=1.2%, 2=3.6%, 4=12.7%, 8=70.2%, 16=12.3%, 32=0.0%, >=64=0.0% 00:32:47.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.312 complete : 0=0.0%, 4=91.1%, 8=4.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.312 issued rwts: total=5188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.312 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:47.312 filename1: (groupid=0, jobs=1): err= 0: pid=925830: Mon Jul 15 09:41:32 2024 00:32:47.312 read: IOPS=495, BW=1983KiB/s (2030kB/s)(19.4MiB/10010msec) 00:32:47.312 slat (nsec): min=5420, max=64440, avg=10548.07, stdev=6982.02 00:32:47.312 clat (usec): min=9619, max=49506, avg=32173.34, stdev=2219.06 00:32:47.312 lat (usec): min=9625, max=49522, avg=32183.89, stdev=2219.00 00:32:47.312 clat percentiles (usec): 00:32:47.312 | 1.00th=[21890], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:32:47.312 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:47.312 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:32:47.312 | 99.00th=[35390], 99.50th=[38536], 99.90th=[49546], 99.95th=[49546], 00:32:47.312 | 99.99th=[49546] 00:32:47.312 bw ( KiB/s): min= 1888, max= 2096, per=4.16%, avg=1974.74, stdev=70.62, samples=19 00:32:47.312 iops : min= 472, max= 524, avg=493.68, stdev=17.65, samples=19 00:32:47.312 lat (msec) : 10=0.12%, 20=0.52%, 50=99.36% 00:32:47.312 cpu : usr=98.95%, sys=0.79%, ctx=10, majf=0, minf=56 00:32:47.312 IO depths : 1=5.9%, 2=11.9%, 4=24.2%, 8=51.3%, 16=6.7%, 32=0.0%, >=64=0.0% 00:32:47.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.312 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.312 issued rwts: total=4962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.312 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:47.312 filename1: (groupid=0, jobs=1): err= 0: pid=925831: Mon Jul 15 09:41:32 2024 00:32:47.312 read: IOPS=490, BW=1963KiB/s (2011kB/s)(19.2MiB/10007msec) 00:32:47.312 slat (nsec): min=5666, max=81194, avg=19456.33, stdev=12418.51 00:32:47.312 clat (usec): min=15331, max=56104, avg=32406.15, stdev=1909.37 00:32:47.312 lat (usec): min=15337, max=56133, avg=32425.61, stdev=1908.70 00:32:47.312 clat percentiles (usec): 00:32:47.312 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:32:47.312 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:47.312 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:32:47.312 | 99.00th=[39060], 99.50th=[42730], 99.90th=[55837], 99.95th=[55837], 00:32:47.312 | 99.99th=[56361] 00:32:47.312 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1953.68, stdev=71.93, samples=19 00:32:47.312 iops : min= 448, max= 512, avg=488.42, stdev=17.98, samples=19 00:32:47.312 lat (msec) : 20=0.33%, 50=99.35%, 100=0.33% 00:32:47.312 cpu : usr=99.19%, sys=0.54%, ctx=10, majf=0, minf=63 00:32:47.312 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:47.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.312 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.312 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.312 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:47.312 filename2: (groupid=0, jobs=1): err= 0: pid=925832: Mon 
Jul 15 09:41:32 2024 00:32:47.312 read: IOPS=494, BW=1977KiB/s (2024kB/s)(19.3MiB/10008msec) 00:32:47.312 slat (nsec): min=5579, max=92662, avg=15675.61, stdev=11878.82 00:32:47.312 clat (usec): min=9355, max=71206, avg=32252.28, stdev=3896.97 00:32:47.312 lat (usec): min=9361, max=71231, avg=32267.96, stdev=3896.86 00:32:47.312 clat percentiles (usec): 00:32:47.312 | 1.00th=[20841], 5.00th=[25560], 10.00th=[28967], 20.00th=[31851], 00:32:47.312 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:47.312 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[38011], 00:32:47.312 | 99.00th=[48497], 99.50th=[50070], 99.90th=[52691], 99.95th=[52691], 00:32:47.312 | 99.99th=[70779] 00:32:47.312 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1964.63, stdev=74.20, samples=19 00:32:47.312 iops : min= 448, max= 512, avg=491.16, stdev=18.55, samples=19 00:32:47.312 lat (msec) : 10=0.12%, 20=0.75%, 50=98.60%, 100=0.53% 00:32:47.312 cpu : usr=99.15%, sys=0.57%, ctx=16, majf=0, minf=93 00:32:47.312 IO depths : 1=3.1%, 2=6.5%, 4=15.1%, 8=64.2%, 16=11.1%, 32=0.0%, >=64=0.0% 00:32:47.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.312 complete : 0=0.0%, 4=91.8%, 8=4.1%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.312 issued rwts: total=4946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.312 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:47.312 filename2: (groupid=0, jobs=1): err= 0: pid=925833: Mon Jul 15 09:41:32 2024 00:32:47.312 read: IOPS=515, BW=2064KiB/s (2113kB/s)(20.2MiB/10010msec) 00:32:47.312 slat (nsec): min=5571, max=60777, avg=8917.79, stdev=4917.81 00:32:47.312 clat (usec): min=2857, max=41345, avg=30934.91, stdev=4590.57 00:32:47.312 lat (usec): min=2875, max=41352, avg=30943.83, stdev=4590.20 00:32:47.312 clat percentiles (usec): 00:32:47.312 | 1.00th=[ 4621], 5.00th=[21103], 10.00th=[24249], 20.00th=[31851], 00:32:47.312 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:47.312 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:32:47.312 | 99.00th=[33817], 99.50th=[34341], 99.90th=[39060], 99.95th=[39584], 00:32:47.312 | 99.99th=[41157] 00:32:47.312 bw ( KiB/s): min= 1920, max= 2864, per=4.33%, avg=2059.20, stdev=243.27, samples=20 00:32:47.312 iops : min= 480, max= 716, avg=514.80, stdev=60.82, samples=20 00:32:47.312 lat (msec) : 4=0.37%, 10=1.28%, 20=1.18%, 50=97.17% 00:32:47.312 cpu : usr=99.14%, sys=0.57%, ctx=56, majf=0, minf=52 00:32:47.312 IO depths : 1=5.7%, 2=11.5%, 4=23.5%, 8=52.5%, 16=6.9%, 32=0.0%, >=64=0.0% 00:32:47.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.312 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.312 issued rwts: total=5164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.312 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:47.312 filename2: (groupid=0, jobs=1): err= 0: pid=925834: Mon Jul 15 09:41:32 2024 00:32:47.312 read: IOPS=492, BW=1970KiB/s (2018kB/s)(19.2MiB/10004msec) 00:32:47.312 slat (nsec): min=5601, max=57074, avg=12631.92, stdev=9084.51 00:32:47.312 clat (usec): min=19331, max=52794, avg=32372.98, stdev=1537.44 00:32:47.312 lat (usec): min=19338, max=52819, avg=32385.61, stdev=1537.71 00:32:47.312 clat percentiles (usec): 00:32:47.312 | 1.00th=[28967], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:32:47.312 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:47.312 | 70.00th=[32637], 
80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:32:47.312 | 99.00th=[34341], 99.50th=[34341], 99.90th=[52691], 99.95th=[52691], 00:32:47.312 | 99.99th=[52691] 00:32:47.312 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1967.16, stdev=76.45, samples=19 00:32:47.312 iops : min= 448, max= 512, avg=491.79, stdev=19.11, samples=19 00:32:47.312 lat (msec) : 20=0.32%, 50=99.35%, 100=0.32% 00:32:47.312 cpu : usr=99.22%, sys=0.49%, ctx=50, majf=0, minf=70 00:32:47.312 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:47.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.312 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.312 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.312 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:47.312 filename2: (groupid=0, jobs=1): err= 0: pid=925835: Mon Jul 15 09:41:32 2024 00:32:47.312 read: IOPS=494, BW=1976KiB/s (2023kB/s)(19.3MiB/10008msec) 00:32:47.312 slat (nsec): min=5612, max=92536, avg=22402.43, stdev=14915.41 00:32:47.312 clat (usec): min=9349, max=52699, avg=32169.83, stdev=2168.95 00:32:47.312 lat (usec): min=9355, max=52718, avg=32192.24, stdev=2169.30 00:32:47.312 clat percentiles (usec): 00:32:47.312 | 1.00th=[30278], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:32:47.312 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:32:47.312 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:32:47.312 | 99.00th=[34341], 99.50th=[34866], 99.90th=[52691], 99.95th=[52691], 00:32:47.312 | 99.99th=[52691] 00:32:47.312 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1960.42, stdev=74.55, samples=19 00:32:47.312 iops : min= 448, max= 512, avg=490.11, stdev=18.64, samples=19 00:32:47.312 lat (msec) : 10=0.32%, 20=0.32%, 50=99.03%, 100=0.32% 00:32:47.312 cpu : usr=98.44%, sys=0.94%, ctx=118, majf=0, minf=45 00:32:47.313 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:47.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.313 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.313 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.313 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:47.313 filename2: (groupid=0, jobs=1): err= 0: pid=925836: Mon Jul 15 09:41:32 2024 00:32:47.313 read: IOPS=495, BW=1981KiB/s (2029kB/s)(19.4MiB/10013msec) 00:32:47.313 slat (nsec): min=5587, max=77174, avg=12053.13, stdev=8712.06 00:32:47.313 clat (usec): min=17648, max=35914, avg=32201.20, stdev=1556.06 00:32:47.313 lat (usec): min=17654, max=35931, avg=32213.25, stdev=1556.15 00:32:47.313 clat percentiles (usec): 00:32:47.313 | 1.00th=[24773], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:32:47.313 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:47.313 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:32:47.313 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:32:47.313 | 99.99th=[35914] 00:32:47.313 bw ( KiB/s): min= 1920, max= 2048, per=4.16%, avg=1977.60, stdev=65.33, samples=20 00:32:47.313 iops : min= 480, max= 512, avg=494.40, stdev=16.33, samples=20 00:32:47.313 lat (msec) : 20=0.65%, 50=99.35% 00:32:47.313 cpu : usr=98.65%, sys=0.79%, ctx=45, majf=0, minf=67 00:32:47.313 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:47.313 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.313 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.313 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.313 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:47.313 filename2: (groupid=0, jobs=1): err= 0: pid=925837: Mon Jul 15 09:41:32 2024 00:32:47.313 read: IOPS=492, BW=1969KiB/s (2017kB/s)(19.2MiB/10009msec) 00:32:47.313 slat (usec): min=5, max=102, avg=19.36, stdev=16.42 00:32:47.313 clat (usec): min=22477, max=62283, avg=32331.27, stdev=1407.62 00:32:47.313 lat (usec): min=22483, max=62316, avg=32350.62, stdev=1406.63 00:32:47.313 clat percentiles (usec): 00:32:47.313 | 1.00th=[30540], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:32:47.313 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:47.313 | 70.00th=[32637], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:32:47.313 | 99.00th=[34866], 99.50th=[40633], 99.90th=[45351], 99.95th=[45351], 00:32:47.313 | 99.99th=[62129] 00:32:47.313 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1967.16, stdev=76.45, samples=19 00:32:47.313 iops : min= 448, max= 512, avg=491.79, stdev=19.11, samples=19 00:32:47.313 lat (msec) : 50=99.96%, 100=0.04% 00:32:47.313 cpu : usr=98.45%, sys=1.01%, ctx=53, majf=0, minf=66 00:32:47.313 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:47.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.313 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.313 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.313 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:47.313 filename2: (groupid=0, jobs=1): err= 0: pid=925838: Mon Jul 15 09:41:32 2024 00:32:47.313 read: IOPS=498, BW=1994KiB/s (2042kB/s)(19.5MiB/10005msec) 00:32:47.313 slat (nsec): min=5567, max=88344, avg=11258.99, stdev=8821.49 00:32:47.313 clat (usec): min=11507, max=56772, avg=32020.76, stdev=4760.47 00:32:47.313 lat (usec): min=11514, max=56781, avg=32032.02, stdev=4760.43 00:32:47.313 clat percentiles (usec): 00:32:47.313 | 1.00th=[17957], 5.00th=[24249], 10.00th=[26084], 20.00th=[31327], 00:32:47.313 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:47.313 | 70.00th=[32637], 80.00th=[33162], 90.00th=[35914], 95.00th=[39584], 00:32:47.313 | 99.00th=[50070], 99.50th=[51643], 99.90th=[54789], 99.95th=[56886], 00:32:47.313 | 99.99th=[56886] 00:32:47.313 bw ( KiB/s): min= 1888, max= 2112, per=4.18%, avg=1986.53, stdev=61.10, samples=19 00:32:47.313 iops : min= 472, max= 528, avg=496.63, stdev=15.28, samples=19 00:32:47.313 lat (msec) : 20=1.70%, 50=97.31%, 100=0.98% 00:32:47.313 cpu : usr=97.28%, sys=1.48%, ctx=339, majf=0, minf=42 00:32:47.313 IO depths : 1=1.6%, 2=3.5%, 4=9.2%, 8=72.3%, 16=13.3%, 32=0.0%, >=64=0.0% 00:32:47.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.313 complete : 0=0.0%, 4=90.4%, 8=6.4%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.313 issued rwts: total=4988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.313 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:47.313 filename2: (groupid=0, jobs=1): err= 0: pid=925839: Mon Jul 15 09:41:32 2024 00:32:47.313 read: IOPS=494, BW=1977KiB/s (2024kB/s)(19.3MiB/10004msec) 00:32:47.313 slat (usec): min=5, max=120, avg=16.17, stdev=15.83 00:32:47.313 clat (usec): min=17857, max=36004, avg=32248.46, stdev=1347.06 00:32:47.313 
lat (usec): min=17890, max=36021, avg=32264.63, stdev=1345.87 00:32:47.313 clat percentiles (usec): 00:32:47.313 | 1.00th=[28705], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:32:47.313 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:47.313 | 70.00th=[32637], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:32:47.313 | 99.00th=[34341], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914], 00:32:47.313 | 99.99th=[35914] 00:32:47.313 bw ( KiB/s): min= 1920, max= 2048, per=4.15%, avg=1973.89, stdev=64.93, samples=19 00:32:47.313 iops : min= 480, max= 512, avg=493.47, stdev=16.23, samples=19 00:32:47.313 lat (msec) : 20=0.32%, 50=99.68% 00:32:47.313 cpu : usr=99.14%, sys=0.57%, ctx=47, majf=0, minf=68 00:32:47.313 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:47.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.313 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.313 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.313 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:47.313 00:32:47.313 Run status group 0 (all jobs): 00:32:47.313 READ: bw=46.4MiB/s (48.6MB/s), 1963KiB/s-2071KiB/s (2011kB/s-2121kB/s), io=466MiB (489MB), run=10003-10054msec 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 
-- # xtrace_disable 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:47.313 bdev_null0 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.313 09:41:33 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:47.313 [2024-07-15 09:41:33.236854] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:47.313 bdev_null1 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 
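For reference, the subsystem setup traced above (rpc_cmd in dif.sh is a thin wrapper around scripts/rpc.py) corresponds roughly to the following manual sequence. This is a minimal sketch using the address and bdev parameters shown in the trace, and it assumes the TCP transport was already created earlier in the run:

    # null bdev: 64 MiB, 512-byte blocks, 16-byte metadata, DIF type 1
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # expose it through an NVMe-oF subsystem on the TCP transport
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420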
00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:47.313 09:41:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:47.313 { 00:32:47.313 "params": { 00:32:47.313 "name": "Nvme$subsystem", 00:32:47.313 "trtype": "$TEST_TRANSPORT", 00:32:47.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:47.313 "adrfam": "ipv4", 00:32:47.313 "trsvcid": "$NVMF_PORT", 00:32:47.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:47.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:47.314 "hdgst": ${hdgst:-false}, 00:32:47.314 "ddgst": ${ddgst:-false} 00:32:47.314 }, 00:32:47.314 "method": "bdev_nvme_attach_controller" 00:32:47.314 } 00:32:47.314 EOF 00:32:47.314 )") 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:47.314 { 00:32:47.314 "params": { 00:32:47.314 "name": "Nvme$subsystem", 00:32:47.314 "trtype": "$TEST_TRANSPORT", 00:32:47.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:47.314 "adrfam": "ipv4", 00:32:47.314 "trsvcid": "$NVMF_PORT", 00:32:47.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:47.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:47.314 "hdgst": ${hdgst:-false}, 00:32:47.314 "ddgst": ${ddgst:-false} 00:32:47.314 }, 00:32:47.314 "method": "bdev_nvme_attach_controller" 
00:32:47.314 } 00:32:47.314 EOF 00:32:47.314 )") 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:47.314 "params": { 00:32:47.314 "name": "Nvme0", 00:32:47.314 "trtype": "tcp", 00:32:47.314 "traddr": "10.0.0.2", 00:32:47.314 "adrfam": "ipv4", 00:32:47.314 "trsvcid": "4420", 00:32:47.314 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:47.314 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:47.314 "hdgst": false, 00:32:47.314 "ddgst": false 00:32:47.314 }, 00:32:47.314 "method": "bdev_nvme_attach_controller" 00:32:47.314 },{ 00:32:47.314 "params": { 00:32:47.314 "name": "Nvme1", 00:32:47.314 "trtype": "tcp", 00:32:47.314 "traddr": "10.0.0.2", 00:32:47.314 "adrfam": "ipv4", 00:32:47.314 "trsvcid": "4420", 00:32:47.314 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:47.314 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:47.314 "hdgst": false, 00:32:47.314 "ddgst": false 00:32:47.314 }, 00:32:47.314 "method": "bdev_nvme_attach_controller" 00:32:47.314 }' 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:47.314 09:41:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:47.314 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:47.314 ... 00:32:47.314 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:47.314 ... 
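The fio invocation above preloads the spdk_bdev engine and hands it the generated JSON and job file through /dev/fd process substitution. Run by hand, the equivalent looks roughly like this; bdev_nvme.json and dif.fio are hypothetical file names standing in for the two descriptors, holding the bdev_nvme_attach_controller config printed above and the generated job file respectively:

    # spdk_bdev is the fio plugin built under spdk/build/fio; the JSON attaches Nvme0/Nvme1 over TCP
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev_nvme.json dif.fio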
00:32:47.314 fio-3.35 00:32:47.314 Starting 4 threads 00:32:47.314 EAL: No free 2048 kB hugepages reported on node 1 00:32:52.612 00:32:52.612 filename0: (groupid=0, jobs=1): err= 0: pid=928300: Mon Jul 15 09:41:39 2024 00:32:52.612 read: IOPS=2079, BW=16.2MiB/s (17.0MB/s)(81.3MiB/5002msec) 00:32:52.612 slat (nsec): min=5411, max=29558, avg=6129.32, stdev=2028.42 00:32:52.612 clat (usec): min=1128, max=6742, avg=3830.08, stdev=683.36 00:32:52.612 lat (usec): min=1133, max=6748, avg=3836.21, stdev=683.22 00:32:52.612 clat percentiles (usec): 00:32:52.612 | 1.00th=[ 2474], 5.00th=[ 2933], 10.00th=[ 3195], 20.00th=[ 3359], 00:32:52.612 | 30.00th=[ 3458], 40.00th=[ 3556], 50.00th=[ 3654], 60.00th=[ 3785], 00:32:52.612 | 70.00th=[ 3982], 80.00th=[ 4359], 90.00th=[ 4883], 95.00th=[ 5211], 00:32:52.612 | 99.00th=[ 5735], 99.50th=[ 5866], 99.90th=[ 6390], 99.95th=[ 6652], 00:32:52.612 | 99.99th=[ 6718] 00:32:52.612 bw ( KiB/s): min=15104, max=17232, per=24.86%, avg=16661.33, stdev=679.25, samples=9 00:32:52.612 iops : min= 1888, max= 2154, avg=2082.67, stdev=84.91, samples=9 00:32:52.612 lat (msec) : 2=0.22%, 4=70.74%, 10=29.04% 00:32:52.612 cpu : usr=97.24%, sys=2.52%, ctx=11, majf=0, minf=46 00:32:52.612 IO depths : 1=0.4%, 2=1.2%, 4=69.6%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:52.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.612 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.612 issued rwts: total=10402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.612 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:52.612 filename0: (groupid=0, jobs=1): err= 0: pid=928301: Mon Jul 15 09:41:39 2024 00:32:52.612 read: IOPS=2108, BW=16.5MiB/s (17.3MB/s)(82.4MiB/5003msec) 00:32:52.612 slat (nsec): min=5406, max=28308, avg=6037.65, stdev=1848.90 00:32:52.612 clat (usec): min=1927, max=43912, avg=3776.55, stdev=1299.56 00:32:52.612 lat (usec): min=1933, max=43940, avg=3782.58, stdev=1299.67 00:32:52.612 clat percentiles (usec): 00:32:52.612 | 1.00th=[ 2606], 5.00th=[ 2966], 10.00th=[ 3097], 20.00th=[ 3294], 00:32:52.612 | 30.00th=[ 3425], 40.00th=[ 3490], 50.00th=[ 3556], 60.00th=[ 3654], 00:32:52.612 | 70.00th=[ 3785], 80.00th=[ 4080], 90.00th=[ 5014], 95.00th=[ 5211], 00:32:52.612 | 99.00th=[ 5735], 99.50th=[ 5866], 99.90th=[ 6718], 99.95th=[43779], 00:32:52.612 | 99.99th=[43779] 00:32:52.612 bw ( KiB/s): min=15472, max=17440, per=25.15%, avg=16851.56, stdev=548.79, samples=9 00:32:52.612 iops : min= 1934, max= 2180, avg=2106.44, stdev=68.60, samples=9 00:32:52.612 lat (msec) : 2=0.02%, 4=78.71%, 10=21.20%, 50=0.08% 00:32:52.612 cpu : usr=97.30%, sys=2.46%, ctx=6, majf=0, minf=26 00:32:52.612 IO depths : 1=0.4%, 2=1.1%, 4=71.1%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:52.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.612 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.612 issued rwts: total=10549,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.612 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:52.612 filename1: (groupid=0, jobs=1): err= 0: pid=928302: Mon Jul 15 09:41:39 2024 00:32:52.612 read: IOPS=2114, BW=16.5MiB/s (17.3MB/s)(82.6MiB/5002msec) 00:32:52.612 slat (nsec): min=5406, max=57417, avg=6122.49, stdev=2164.29 00:32:52.612 clat (usec): min=1522, max=6735, avg=3766.01, stdev=681.65 00:32:52.612 lat (usec): min=1539, max=6741, avg=3772.13, stdev=681.49 00:32:52.612 clat percentiles (usec): 00:32:52.612 | 1.00th=[ 2606], 
5.00th=[ 3032], 10.00th=[ 3195], 20.00th=[ 3326], 00:32:52.612 | 30.00th=[ 3425], 40.00th=[ 3490], 50.00th=[ 3589], 60.00th=[ 3687], 00:32:52.612 | 70.00th=[ 3785], 80.00th=[ 4015], 90.00th=[ 5145], 95.00th=[ 5211], 00:32:52.612 | 99.00th=[ 5735], 99.50th=[ 5932], 99.90th=[ 6325], 99.95th=[ 6652], 00:32:52.612 | 99.99th=[ 6718] 00:32:52.612 bw ( KiB/s): min=16480, max=17248, per=25.27%, avg=16935.33, stdev=236.14, samples=9 00:32:52.612 iops : min= 2060, max= 2156, avg=2116.89, stdev=29.48, samples=9 00:32:52.612 lat (msec) : 2=0.09%, 4=79.59%, 10=20.32% 00:32:52.612 cpu : usr=97.36%, sys=2.40%, ctx=12, majf=0, minf=69 00:32:52.612 IO depths : 1=0.3%, 2=0.6%, 4=71.6%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:52.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.612 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.612 issued rwts: total=10577,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.612 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:52.612 filename1: (groupid=0, jobs=1): err= 0: pid=928303: Mon Jul 15 09:41:39 2024 00:32:52.612 read: IOPS=2074, BW=16.2MiB/s (17.0MB/s)(81.1MiB/5002msec) 00:32:52.612 slat (nsec): min=5407, max=34669, avg=5997.28, stdev=1783.71 00:32:52.612 clat (usec): min=1100, max=7068, avg=3839.37, stdev=720.90 00:32:52.612 lat (usec): min=1106, max=7073, avg=3845.36, stdev=720.78 00:32:52.612 clat percentiles (usec): 00:32:52.612 | 1.00th=[ 2704], 5.00th=[ 3097], 10.00th=[ 3195], 20.00th=[ 3392], 00:32:52.612 | 30.00th=[ 3458], 40.00th=[ 3523], 50.00th=[ 3621], 60.00th=[ 3720], 00:32:52.612 | 70.00th=[ 3785], 80.00th=[ 4146], 90.00th=[ 5211], 95.00th=[ 5407], 00:32:52.612 | 99.00th=[ 5866], 99.50th=[ 6063], 99.90th=[ 6718], 99.95th=[ 6915], 00:32:52.612 | 99.99th=[ 7046] 00:32:52.612 bw ( KiB/s): min=16288, max=17360, per=24.75%, avg=16583.22, stdev=316.77, samples=9 00:32:52.612 iops : min= 2036, max= 2170, avg=2072.89, stdev=39.60, samples=9 00:32:52.612 lat (msec) : 2=0.05%, 4=76.23%, 10=23.72% 00:32:52.612 cpu : usr=97.10%, sys=2.66%, ctx=7, majf=0, minf=49 00:32:52.612 IO depths : 1=0.4%, 2=0.9%, 4=71.4%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:52.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.612 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.612 issued rwts: total=10376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.612 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:52.612 00:32:52.612 Run status group 0 (all jobs): 00:32:52.612 READ: bw=65.4MiB/s (68.6MB/s), 16.2MiB/s-16.5MiB/s (17.0MB/s-17.3MB/s), io=327MiB (343MB), run=5002-5003msec 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.612 00:32:52.612 real 0m24.616s 00:32:52.612 user 5m14.937s 00:32:52.612 sys 0m4.034s 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:52.612 09:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.612 ************************************ 00:32:52.612 END TEST fio_dif_rand_params 00:32:52.612 ************************************ 00:32:52.613 09:41:39 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:32:52.613 09:41:39 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:52.613 09:41:39 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:52.613 09:41:39 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:52.613 09:41:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:52.613 ************************************ 00:32:52.613 START TEST fio_dif_digest 00:32:52.613 ************************************ 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # 
ddgst=true 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:52.613 bdev_null0 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:52.613 [2024-07-15 09:41:39.754619] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:52.613 { 00:32:52.613 "params": { 00:32:52.613 "name": "Nvme$subsystem", 00:32:52.613 "trtype": "$TEST_TRANSPORT", 00:32:52.613 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:32:52.613 "adrfam": "ipv4", 00:32:52.613 "trsvcid": "$NVMF_PORT", 00:32:52.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:52.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:52.613 "hdgst": ${hdgst:-false}, 00:32:52.613 "ddgst": ${ddgst:-false} 00:32:52.613 }, 00:32:52.613 "method": "bdev_nvme_attach_controller" 00:32:52.613 } 00:32:52.613 EOF 00:32:52.613 )") 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:52.613 "params": { 00:32:52.613 "name": "Nvme0", 00:32:52.613 "trtype": "tcp", 00:32:52.613 "traddr": "10.0.0.2", 00:32:52.613 "adrfam": "ipv4", 00:32:52.613 "trsvcid": "4420", 00:32:52.613 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:52.613 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:52.613 "hdgst": true, 00:32:52.613 "ddgst": true 00:32:52.613 }, 00:32:52.613 "method": "bdev_nvme_attach_controller" 00:32:52.613 }' 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:52.613 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:52.906 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:52.906 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:52.906 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:52.906 09:41:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:53.173 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:53.173 ... 
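The fio_bdev wrapper traced above preloads SPDK's fio bdev plugin and hands fio two anonymous descriptors: the bdev JSON on /dev/fd/62 and the generated job file on /dev/fd/61. Outside the harness the same run could be reproduced roughly as follows, with bdev.json holding the JSON printed above and dif.fio a three-job 128 KiB randread job file (both file names are placeholders, not taken from the trace):

  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio

Each of the three jobs drives the attached NVMe-over-TCP bdev with 128 KiB random reads at iodepth 3, with NVMe/TCP header and data digests enabled on the connection.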
00:32:53.173 fio-3.35 00:32:53.173 Starting 3 threads 00:32:53.173 EAL: No free 2048 kB hugepages reported on node 1 00:33:05.398 00:33:05.398 filename0: (groupid=0, jobs=1): err= 0: pid=929567: Mon Jul 15 09:41:50 2024 00:33:05.398 read: IOPS=221, BW=27.7MiB/s (29.1MB/s)(278MiB/10045msec) 00:33:05.398 slat (usec): min=5, max=117, avg= 6.95, stdev= 2.71 00:33:05.398 clat (usec): min=8796, max=47274, avg=13484.09, stdev=1337.34 00:33:05.398 lat (usec): min=8802, max=47283, avg=13491.04, stdev=1337.37 00:33:05.398 clat percentiles (usec): 00:33:05.398 | 1.00th=[10552], 5.00th=[11731], 10.00th=[12125], 20.00th=[12649], 00:33:05.398 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:33:05.398 | 70.00th=[13960], 80.00th=[14353], 90.00th=[14877], 95.00th=[15401], 00:33:05.398 | 99.00th=[16188], 99.50th=[16712], 99.90th=[19006], 99.95th=[19268], 00:33:05.398 | 99.99th=[47449] 00:33:05.398 bw ( KiB/s): min=27648, max=29952, per=34.08%, avg=28492.80, stdev=599.51, samples=20 00:33:05.398 iops : min= 216, max= 234, avg=222.60, stdev= 4.68, samples=20 00:33:05.398 lat (msec) : 10=0.36%, 20=99.60%, 50=0.04% 00:33:05.398 cpu : usr=94.95%, sys=4.80%, ctx=29, majf=0, minf=141 00:33:05.398 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:05.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.398 issued rwts: total=2227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.398 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:05.398 filename0: (groupid=0, jobs=1): err= 0: pid=929568: Mon Jul 15 09:41:50 2024 00:33:05.398 read: IOPS=212, BW=26.5MiB/s (27.8MB/s)(266MiB/10046msec) 00:33:05.398 slat (nsec): min=5630, max=56664, avg=7194.92, stdev=1766.77 00:33:05.398 clat (usec): min=8228, max=54062, avg=14113.10, stdev=1700.99 00:33:05.398 lat (usec): min=8234, max=54070, avg=14120.30, stdev=1701.04 00:33:05.398 clat percentiles (usec): 00:33:05.398 | 1.00th=[10945], 5.00th=[11994], 10.00th=[12518], 20.00th=[13173], 00:33:05.398 | 30.00th=[13435], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:33:05.398 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15664], 95.00th=[16057], 00:33:05.398 | 99.00th=[17171], 99.50th=[17695], 99.90th=[19268], 99.95th=[49021], 00:33:05.398 | 99.99th=[54264] 00:33:05.398 bw ( KiB/s): min=25856, max=28928, per=32.60%, avg=27251.20, stdev=711.95, samples=20 00:33:05.398 iops : min= 202, max= 226, avg=212.90, stdev= 5.56, samples=20 00:33:05.398 lat (msec) : 10=0.56%, 20=99.34%, 50=0.05%, 100=0.05% 00:33:05.398 cpu : usr=95.18%, sys=4.57%, ctx=19, majf=0, minf=210 00:33:05.398 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:05.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.398 issued rwts: total=2131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.398 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:05.398 filename0: (groupid=0, jobs=1): err= 0: pid=929569: Mon Jul 15 09:41:50 2024 00:33:05.398 read: IOPS=219, BW=27.4MiB/s (28.8MB/s)(276MiB/10048msec) 00:33:05.398 slat (nsec): min=5643, max=37659, avg=7337.46, stdev=1536.30 00:33:05.398 clat (usec): min=9914, max=55924, avg=13648.41, stdev=2150.21 00:33:05.398 lat (usec): min=9920, max=55930, avg=13655.74, stdev=2150.19 00:33:05.398 clat percentiles (usec): 00:33:05.398 | 1.00th=[11076], 
5.00th=[11863], 10.00th=[12256], 20.00th=[12780], 00:33:05.398 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829], 00:33:05.398 | 70.00th=[14091], 80.00th=[14353], 90.00th=[14877], 95.00th=[15270], 00:33:05.398 | 99.00th=[16188], 99.50th=[16712], 99.90th=[54264], 99.95th=[54789], 00:33:05.398 | 99.99th=[55837] 00:33:05.398 bw ( KiB/s): min=25856, max=28672, per=33.72%, avg=28185.60, stdev=648.17, samples=20 00:33:05.398 iops : min= 202, max= 224, avg=220.20, stdev= 5.06, samples=20 00:33:05.398 lat (msec) : 10=0.14%, 20=99.59%, 50=0.09%, 100=0.18% 00:33:05.398 cpu : usr=95.32%, sys=4.43%, ctx=33, majf=0, minf=102 00:33:05.398 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:05.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.398 issued rwts: total=2204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.398 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:05.398 00:33:05.398 Run status group 0 (all jobs): 00:33:05.398 READ: bw=81.6MiB/s (85.6MB/s), 26.5MiB/s-27.7MiB/s (27.8MB/s-29.1MB/s), io=820MiB (860MB), run=10045-10048msec 00:33:05.398 09:41:50 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:33:05.398 09:41:50 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:33:05.398 09:41:50 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:33:05.398 09:41:50 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:05.398 09:41:50 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:33:05.398 09:41:50 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:05.398 09:41:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.398 09:41:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:05.398 09:41:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.398 09:41:50 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:05.398 09:41:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.398 09:41:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:05.398 09:41:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.398 00:33:05.398 real 0m11.126s 00:33:05.398 user 0m40.627s 00:33:05.398 sys 0m1.718s 00:33:05.398 09:41:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:05.398 09:41:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:05.398 ************************************ 00:33:05.398 END TEST fio_dif_digest 00:33:05.398 ************************************ 00:33:05.398 09:41:50 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:33:05.398 09:41:50 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:33:05.398 09:41:50 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:33:05.398 09:41:50 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:05.398 09:41:50 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:33:05.398 09:41:50 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:05.398 09:41:50 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:33:05.398 09:41:50 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:05.398 09:41:50 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:33:05.399 rmmod nvme_tcp 00:33:05.399 rmmod nvme_fabrics 00:33:05.399 rmmod nvme_keyring 00:33:05.399 09:41:50 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:05.399 09:41:50 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:33:05.399 09:41:50 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:33:05.399 09:41:50 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 919186 ']' 00:33:05.399 09:41:50 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 919186 00:33:05.399 09:41:50 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 919186 ']' 00:33:05.399 09:41:50 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 919186 00:33:05.399 09:41:50 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:33:05.399 09:41:50 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:05.399 09:41:50 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 919186 00:33:05.399 09:41:51 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:05.399 09:41:51 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:05.399 09:41:51 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 919186' 00:33:05.399 killing process with pid 919186 00:33:05.399 09:41:51 nvmf_dif -- common/autotest_common.sh@967 -- # kill 919186 00:33:05.399 09:41:51 nvmf_dif -- common/autotest_common.sh@972 -- # wait 919186 00:33:05.399 09:41:51 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:33:05.399 09:41:51 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:07.310 Waiting for block devices as requested 00:33:07.310 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:07.570 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:07.570 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:07.570 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:07.831 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:07.831 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:07.831 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:08.092 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:08.092 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:08.092 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:08.353 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:08.353 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:08.353 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:08.353 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:08.613 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:08.613 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:08.613 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:08.613 09:41:55 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:08.613 09:41:55 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:08.613 09:41:55 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:08.613 09:41:55 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:08.613 09:41:55 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.614 09:41:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:08.614 09:41:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.153 09:41:57 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:11.153 00:33:11.153 real 1m18.676s 00:33:11.153 user 7m55.634s 00:33:11.153 sys 0m20.615s 00:33:11.153 09:41:57 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:11.153 
09:41:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:11.153 ************************************ 00:33:11.153 END TEST nvmf_dif 00:33:11.153 ************************************ 00:33:11.153 09:41:57 -- common/autotest_common.sh@1142 -- # return 0 00:33:11.153 09:41:57 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:11.153 09:41:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:11.153 09:41:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:11.153 09:41:57 -- common/autotest_common.sh@10 -- # set +x 00:33:11.153 ************************************ 00:33:11.153 START TEST nvmf_abort_qd_sizes 00:33:11.153 ************************************ 00:33:11.153 09:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:11.153 * Looking for test storage... 00:33:11.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:11.153 09:41:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:11.153 09:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.153 09:41:58 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:11.153 09:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:33:11.154 09:41:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:19.291 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:19.291 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:19.291 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:19.292 Found net devices under 0000:31:00.0: cvl_0_0 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:19.292 Found net devices under 0000:31:00.1: cvl_0_1 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
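The two E810 ports found above (net devices cvl_0_0 and cvl_0_1) are then split across network namespaces by nvmf_tcp_init, traced next: the first port becomes the target interface inside cvl_0_0_ns_spdk, the second stays in the root namespace as the initiator. Condensed from the trace (same interface names and addresses), the topology boils down to:

  # target side, isolated in its own namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # initiator side, root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target sanity check

The SPDK target's NVMe/TCP listeners in this test therefore sit on 10.0.0.2:4420 inside the namespace, while initiators connect from 10.0.0.1 (the kernel-target test later reverses the roles and listens on 10.0.0.1).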
00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:19.292 09:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:19.292 09:42:06 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:19.292 09:42:06 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:19.292 09:42:06 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:19.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:19.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:33:19.292 00:33:19.292 --- 10.0.0.2 ping statistics --- 00:33:19.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:19.292 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:33:19.292 09:42:06 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:19.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:19.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:33:19.292 00:33:19.292 --- 10.0.0.1 ping statistics --- 00:33:19.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:19.292 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:33:19.292 09:42:06 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:19.292 09:42:06 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:33:19.292 09:42:06 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:33:19.292 09:42:06 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:22.593 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:22.593 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:22.593 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:22.593 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:22.593 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:22.593 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:22.854 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:22.854 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:22.854 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:22.854 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:22.854 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:22.854 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:22.854 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:22.854 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:22.854 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:22.854 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:22.854 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:33:22.854 09:42:10 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:22.854 09:42:10 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:22.855 09:42:10 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:22.855 09:42:10 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:22.855 09:42:10 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:22.855 09:42:10 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:23.115 09:42:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:33:23.115 09:42:10 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:23.115 09:42:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:23.115 09:42:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:23.115 09:42:10 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=940252 00:33:23.115 09:42:10 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 940252 00:33:23.115 09:42:10 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:23.115 09:42:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 940252 ']' 00:33:23.115 09:42:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:23.115 09:42:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:23.115 09:42:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:23.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:23.115 09:42:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:23.115 09:42:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:23.115 [2024-07-15 09:42:10.132222] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:33:23.115 [2024-07-15 09:42:10.132296] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:23.115 EAL: No free 2048 kB hugepages reported on node 1 00:33:23.115 [2024-07-15 09:42:10.212477] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:23.115 [2024-07-15 09:42:10.288692] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:23.115 [2024-07-15 09:42:10.288733] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:23.115 [2024-07-15 09:42:10.288740] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:23.115 [2024-07-15 09:42:10.288747] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:23.115 [2024-07-15 09:42:10.288757] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:23.115 [2024-07-15 09:42:10.288875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:23.115 [2024-07-15 09:42:10.288992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:23.115 [2024-07-15 09:42:10.289146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:23.115 [2024-07-15 09:42:10.289146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:24.056 09:42:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:24.056 09:42:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:33:24.056 09:42:10 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:24.056 09:42:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:24.056 09:42:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:24.056 09:42:10 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:24.056 09:42:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:24.056 09:42:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:33:24.056 09:42:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:33:24.056 09:42:10 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:33:24.056 09:42:10 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:33:24.056 09:42:10 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:33:24.056 09:42:10 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:33:24.056 09:42:10 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:33:24.056 09:42:10 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:33:24.056 09:42:10 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:33:24.056 09:42:10 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:33:24.056 09:42:10 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:33:24.056 09:42:10 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:33:24.056 09:42:10 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:33:24.056 09:42:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:33:24.056 09:42:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:33:24.056 09:42:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:33:24.056 09:42:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:24.057 09:42:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:24.057 09:42:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:24.057 ************************************ 00:33:24.057 START TEST spdk_target_abort 00:33:24.057 ************************************ 00:33:24.057 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:33:24.057 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:24.057 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:33:24.057 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.057 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:24.320 spdk_targetn1 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:24.320 [2024-07-15 09:42:11.324765] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:24.320 [2024-07-15 09:42:11.365028] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:24.320 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:24.321 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:24.321 09:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:24.321 EAL: No free 2048 kB hugepages 
reported on node 1 00:33:24.624 [2024-07-15 09:42:11.522188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:408 len:8 PRP1 0x2000078be000 PRP2 0x0 00:33:24.624 [2024-07-15 09:42:11.522223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0034 p:1 m:0 dnr:0 00:33:24.624 [2024-07-15 09:42:11.530212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:768 len:8 PRP1 0x2000078be000 PRP2 0x0 00:33:24.624 [2024-07-15 09:42:11.530241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0062 p:1 m:0 dnr:0 00:33:24.624 [2024-07-15 09:42:11.554639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2072 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:33:24.624 [2024-07-15 09:42:11.554659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:27.941 Initializing NVMe Controllers 00:33:27.941 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:27.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:27.941 Initialization complete. Launching workers. 00:33:27.941 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 18124, failed: 3 00:33:27.941 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3457, failed to submit 14670 00:33:27.941 success 662, unsuccess 2795, failed 0 00:33:27.941 09:42:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:27.941 09:42:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:27.941 EAL: No free 2048 kB hugepages reported on node 1 00:33:27.941 [2024-07-15 09:42:14.708049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:216 len:8 PRP1 0x200007c50000 PRP2 0x0 00:33:27.941 [2024-07-15 09:42:14.708090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:002c p:1 m:0 dnr:0 00:33:27.941 [2024-07-15 09:42:14.724906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:608 len:8 PRP1 0x200007c5c000 PRP2 0x0 00:33:27.941 [2024-07-15 09:42:14.724930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:33:27.941 [2024-07-15 09:42:14.796866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:2312 len:8 PRP1 0x200007c40000 PRP2 0x0 00:33:27.941 [2024-07-15 09:42:14.796892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:27.941 [2024-07-15 09:42:14.820881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:2944 len:8 PRP1 0x200007c52000 PRP2 0x0 00:33:27.941 [2024-07-15 09:42:14.820903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:28.881 [2024-07-15 09:42:15.888885] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:4 cid:191 nsid:1 lba:27008 len:8 PRP1 0x200007c58000 PRP2 0x0 00:33:28.881 [2024-07-15 09:42:15.888920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:30.792 [2024-07-15 09:42:17.641006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:66976 len:8 PRP1 0x200007c48000 PRP2 0x0 00:33:30.792 [2024-07-15 09:42:17.641042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:00c3 p:1 m:0 dnr:0 00:33:30.792 Initializing NVMe Controllers 00:33:30.792 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:30.792 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:30.792 Initialization complete. Launching workers. 00:33:30.792 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8574, failed: 6 00:33:30.792 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1232, failed to submit 7348 00:33:30.792 success 352, unsuccess 880, failed 0 00:33:30.792 09:42:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:30.792 09:42:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:30.792 EAL: No free 2048 kB hugepages reported on node 1 00:33:34.089 Initializing NVMe Controllers 00:33:34.089 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:34.089 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:34.089 Initialization complete. Launching workers. 
00:33:34.089 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42542, failed: 0 00:33:34.089 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2612, failed to submit 39930 00:33:34.089 success 595, unsuccess 2017, failed 0 00:33:34.089 09:42:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:33:34.089 09:42:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.089 09:42:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:34.089 09:42:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.089 09:42:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:34.089 09:42:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.089 09:42:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:35.995 09:42:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.995 09:42:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 940252 00:33:35.995 09:42:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 940252 ']' 00:33:35.995 09:42:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 940252 00:33:35.995 09:42:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:33:35.995 09:42:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:35.995 09:42:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 940252 00:33:35.995 09:42:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:35.995 09:42:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:35.995 09:42:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 940252' 00:33:35.995 killing process with pid 940252 00:33:35.995 09:42:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 940252 00:33:35.995 09:42:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 940252 00:33:35.995 00:33:35.995 real 0m12.132s 00:33:35.995 user 0m49.473s 00:33:35.995 sys 0m1.688s 00:33:35.995 09:42:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:35.996 09:42:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:35.996 ************************************ 00:33:35.996 END TEST spdk_target_abort 00:33:35.996 ************************************ 00:33:35.996 09:42:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:33:35.996 09:42:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:33:35.996 09:42:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:35.996 09:42:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:35.996 09:42:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:36.255 
************************************ 00:33:36.255 START TEST kernel_target_abort 00:33:36.255 ************************************ 00:33:36.255 09:42:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:33:36.255 09:42:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:33:36.255 09:42:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:33:36.255 09:42:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:36.255 09:42:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:36.255 09:42:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:36.255 09:42:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:36.255 09:42:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:36.255 09:42:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:36.255 09:42:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:36.255 09:42:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:36.255 09:42:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:36.255 09:42:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:36.255 09:42:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:36.255 09:42:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:36.255 09:42:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:36.255 09:42:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:36.255 09:42:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:36.255 09:42:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:33:36.255 09:42:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:36.255 09:42:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:36.255 09:42:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:36.255 09:42:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:40.464 Waiting for block devices as requested 00:33:40.464 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:40.464 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:40.464 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:40.464 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:40.464 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:40.464 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:40.464 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:40.464 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:40.464 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:40.724 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:40.724 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:40.725 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:40.725 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:40.985 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:40.985 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:40.985 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:41.248 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:41.248 No valid GPT data, bailing 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:41.248 09:42:28 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:33:41.248 00:33:41.248 Discovery Log Number of Records 2, Generation counter 2 00:33:41.248 =====Discovery Log Entry 0====== 00:33:41.248 trtype: tcp 00:33:41.248 adrfam: ipv4 00:33:41.248 subtype: current discovery subsystem 00:33:41.248 treq: not specified, sq flow control disable supported 00:33:41.248 portid: 1 00:33:41.248 trsvcid: 4420 00:33:41.248 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:41.248 traddr: 10.0.0.1 00:33:41.248 eflags: none 00:33:41.248 sectype: none 00:33:41.248 =====Discovery Log Entry 1====== 00:33:41.248 trtype: tcp 00:33:41.248 adrfam: ipv4 00:33:41.248 subtype: nvme subsystem 00:33:41.248 treq: not specified, sq flow control disable supported 00:33:41.248 portid: 1 00:33:41.248 trsvcid: 4420 00:33:41.248 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:41.248 traddr: 10.0.0.1 00:33:41.248 eflags: none 00:33:41.248 sectype: none 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:41.248 09:42:28 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:41.248 09:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:41.248 EAL: No free 2048 kB hugepages reported on node 1 00:33:44.549 Initializing NVMe Controllers 00:33:44.549 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:44.549 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:44.549 Initialization complete. Launching workers. 00:33:44.549 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64559, failed: 0 00:33:44.549 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 64559, failed to submit 0 00:33:44.549 success 0, unsuccess 64559, failed 0 00:33:44.549 09:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:44.549 09:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:44.549 EAL: No free 2048 kB hugepages reported on node 1 00:33:47.848 Initializing NVMe Controllers 00:33:47.848 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:47.848 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:47.848 Initialization complete. Launching workers. 
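The trace above builds the in-kernel NVMe-oF target entirely through configfs before pointing the abort example at it. A minimal standalone sketch of the same steps, assuming the standard nvmet configfs attribute names (attr_allow_any_host, device_path, enable, addr_*); the NQN, block device and 10.0.0.1:4420 listener are taken from the log, everything else is illustrative:

  # assumes the nvmet/nvmet-tcp modules are available and /dev/nvme0n1 is free
  modprobe nvmet nvmet_tcp
  nqn=nqn.2016-06.io.spdk:testnqn
  subsys=/sys/kernel/config/nvmet/subsystems/$nqn

  mkdir -p "$subsys/namespaces/1" /sys/kernel/config/nvmet/ports/1
  echo 1             > "$subsys/attr_allow_any_host"        # the bare 'echo 1' in the log, assumed to land here
  echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"   # back the namespace with the local NVMe disk
  echo 1             > "$subsys/namespaces/1/enable"

  echo 10.0.0.1 > /sys/kernel/config/nvmet/ports/1/addr_traddr
  echo tcp      > /sys/kernel/config/nvmet/ports/1/addr_trtype
  echo 4420     > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
  echo ipv4     > /sys/kernel/config/nvmet/ports/1/addr_adrfam

  # linking the subsystem under the port is what makes it discoverable at 10.0.0.1:4420
  ln -s "$subsys" /sys/kernel/config/nvmet/ports/1/subsystems/

With the target exposed this way, the discovery log shows the testnqn entry and the abort example can connect to it, as the trace does at queue depths 4, 24 and 64.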
00:33:47.848 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 106016, failed: 0 00:33:47.848 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26682, failed to submit 79334 00:33:47.848 success 0, unsuccess 26682, failed 0 00:33:47.848 09:42:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:47.848 09:42:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:47.848 EAL: No free 2048 kB hugepages reported on node 1 00:33:50.393 Initializing NVMe Controllers 00:33:50.393 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:50.393 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:50.393 Initialization complete. Launching workers. 00:33:50.393 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 100852, failed: 0 00:33:50.393 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25186, failed to submit 75666 00:33:50.393 success 0, unsuccess 25186, failed 0 00:33:50.393 09:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:50.393 09:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:50.393 09:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:33:50.393 09:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:50.393 09:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:50.393 09:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:50.393 09:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:50.393 09:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:50.393 09:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:50.654 09:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:54.861 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:54.861 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:54.861 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:54.861 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:54.861 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:54.861 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:54.861 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:54.861 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:54.861 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:54.861 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:54.861 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:54.861 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:54.861 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:54.861 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:33:54.861 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:54.861 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:56.249 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:33:56.249 00:33:56.249 real 0m20.033s 00:33:56.249 user 0m9.693s 00:33:56.249 sys 0m6.092s 00:33:56.249 09:42:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:56.249 09:42:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:56.249 ************************************ 00:33:56.249 END TEST kernel_target_abort 00:33:56.249 ************************************ 00:33:56.249 09:42:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:33:56.249 09:42:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:56.249 09:42:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:56.249 09:42:43 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:56.249 09:42:43 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:33:56.249 09:42:43 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:56.249 09:42:43 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:33:56.249 09:42:43 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:56.249 09:42:43 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:56.249 rmmod nvme_tcp 00:33:56.249 rmmod nvme_fabrics 00:33:56.249 rmmod nvme_keyring 00:33:56.249 09:42:43 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:56.249 09:42:43 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:33:56.249 09:42:43 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:33:56.249 09:42:43 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 940252 ']' 00:33:56.249 09:42:43 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 940252 00:33:56.249 09:42:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 940252 ']' 00:33:56.249 09:42:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 940252 00:33:56.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (940252) - No such process 00:33:56.249 09:42:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 940252 is not found' 00:33:56.249 Process with pid 940252 is not found 00:33:56.249 09:42:43 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:33:56.249 09:42:43 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:00.462 Waiting for block devices as requested 00:34:00.462 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:00.462 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:00.462 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:00.462 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:00.462 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:00.462 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:00.462 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:00.462 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:00.759 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:34:00.759 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:00.759 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:01.036 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:01.036 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:01.036 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 
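clean_kernel_target undoes that configfs tree in the same order the trace shows; roughly the equivalent teardown by hand (the target of the bare 'echo 0' is assumed to be the namespace enable switch):

  nqn=nqn.2016-06.io.spdk:testnqn
  echo 0 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable   # disable the namespace first
  # the port must stop referencing the subsystem before either side can be removed
  rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/$nqn
  rmdir  /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1
  rmdir  /sys/kernel/config/nvmet/ports/1
  rmdir  /sys/kernel/config/nvmet/subsystems/$nqn
  modprobe -r nvmet_tcp nvmet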
00:34:01.036 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:01.297 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:01.297 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:01.297 09:42:48 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:01.297 09:42:48 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:01.297 09:42:48 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:01.297 09:42:48 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:01.297 09:42:48 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:01.297 09:42:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:01.297 09:42:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:03.842 09:42:50 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:03.842 00:34:03.842 real 0m52.534s 00:34:03.842 user 1m4.738s 00:34:03.842 sys 0m19.203s 00:34:03.842 09:42:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:03.842 09:42:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:03.842 ************************************ 00:34:03.842 END TEST nvmf_abort_qd_sizes 00:34:03.842 ************************************ 00:34:03.842 09:42:50 -- common/autotest_common.sh@1142 -- # return 0 00:34:03.842 09:42:50 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:03.842 09:42:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:03.842 09:42:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:03.842 09:42:50 -- common/autotest_common.sh@10 -- # set +x 00:34:03.842 ************************************ 00:34:03.842 START TEST keyring_file 00:34:03.842 ************************************ 00:34:03.842 09:42:50 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:03.842 * Looking for test storage... 
00:34:03.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:03.842 09:42:50 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:03.842 09:42:50 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:03.842 09:42:50 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:34:03.842 09:42:50 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:03.842 09:42:50 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:03.842 09:42:50 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:03.842 09:42:50 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:03.842 09:42:50 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:03.842 09:42:50 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:03.842 09:42:50 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:03.842 09:42:50 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:03.842 09:42:50 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:03.842 09:42:50 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:03.842 09:42:50 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:34:03.842 09:42:50 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:03.843 09:42:50 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:03.843 09:42:50 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:03.843 09:42:50 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:03.843 09:42:50 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.843 09:42:50 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.843 09:42:50 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.843 09:42:50 keyring_file -- paths/export.sh@5 -- # export PATH 00:34:03.843 09:42:50 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@47 -- # : 0 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:03.843 09:42:50 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:03.843 09:42:50 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:03.843 09:42:50 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:03.843 09:42:50 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:34:03.843 09:42:50 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:34:03.843 09:42:50 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:34:03.843 09:42:50 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:03.843 09:42:50 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:03.843 09:42:50 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:03.843 09:42:50 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:03.843 09:42:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:03.843 09:42:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:03.843 09:42:50 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FUcerW5jpM 00:34:03.843 09:42:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@705 -- # python - 00:34:03.843 09:42:50 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FUcerW5jpM 00:34:03.843 09:42:50 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FUcerW5jpM 00:34:03.843 09:42:50 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.FUcerW5jpM 00:34:03.843 09:42:50 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:34:03.843 09:42:50 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:03.843 09:42:50 keyring_file -- keyring/common.sh@17 -- # name=key1 00:34:03.843 09:42:50 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:03.843 09:42:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:03.843 09:42:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:03.843 09:42:50 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.z755MioUFw 00:34:03.843 09:42:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:34:03.843 09:42:50 keyring_file -- nvmf/common.sh@705 -- # python - 00:34:03.843 09:42:50 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.z755MioUFw 00:34:03.843 09:42:50 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.z755MioUFw 00:34:03.843 09:42:50 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.z755MioUFw 00:34:03.843 09:42:50 keyring_file -- keyring/file.sh@30 -- # tgtpid=951099 00:34:03.843 09:42:50 keyring_file -- keyring/file.sh@32 -- # waitforlisten 951099 00:34:03.843 09:42:50 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:03.843 09:42:50 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 951099 ']' 00:34:03.843 09:42:50 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:03.843 09:42:50 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:03.843 09:42:50 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:03.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:03.843 09:42:50 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:03.843 09:42:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:03.843 [2024-07-15 09:42:50.824298] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
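prep_key above writes each PSK into a private temp file before it is handed to the keyring. A minimal sketch of the same idea; the exact python helper that produces the NVMeTLSkey-1 interchange string is not reproduced here, so the echoed content is a placeholder:

  key_path=$(mktemp)                      # e.g. /tmp/tmp.FUcerW5jpM in the log
  # format_interchange_psk wraps the raw hex key 00112233445566778899aabbccddeeff
  # into an "NVMeTLSkey-1:..." string; placeholder below stands in for that output
  echo "NVMeTLSkey-1:00:<wrapped key material>:" > "$key_path"
  chmod 0600 "$key_path"                  # keyring_file_add_key rejects anything more permissive

  # register it with the bdevperf instance over its private RPC socket
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key_path"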
00:34:03.843 [2024-07-15 09:42:50.824371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid951099 ] 00:34:03.843 EAL: No free 2048 kB hugepages reported on node 1 00:34:03.843 [2024-07-15 09:42:50.896467] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:03.843 [2024-07-15 09:42:50.971506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.413 09:42:51 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:04.413 09:42:51 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:34:04.413 09:42:51 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:34:04.413 09:42:51 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.413 09:42:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:04.413 [2024-07-15 09:42:51.587067] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:04.413 null0 00:34:04.674 [2024-07-15 09:42:51.619105] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:04.674 [2024-07-15 09:42:51.619341] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:04.674 [2024-07-15 09:42:51.627121] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:34:04.674 09:42:51 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.674 09:42:51 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:04.674 09:42:51 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:34:04.674 09:42:51 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:04.674 09:42:51 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:04.674 09:42:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:04.674 09:42:51 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:04.674 09:42:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:04.674 09:42:51 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:04.674 09:42:51 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.674 09:42:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:04.674 [2024-07-15 09:42:51.643165] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:34:04.674 request: 00:34:04.674 { 00:34:04.674 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:34:04.674 "secure_channel": false, 00:34:04.674 "listen_address": { 00:34:04.674 "trtype": "tcp", 00:34:04.674 "traddr": "127.0.0.1", 00:34:04.674 "trsvcid": "4420" 00:34:04.674 }, 00:34:04.674 "method": "nvmf_subsystem_add_listener", 00:34:04.674 "req_id": 1 00:34:04.674 } 00:34:04.674 Got JSON-RPC error response 00:34:04.674 response: 00:34:04.674 { 00:34:04.674 "code": -32602, 00:34:04.674 "message": "Invalid parameters" 00:34:04.674 } 00:34:04.674 09:42:51 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:04.674 09:42:51 keyring_file -- common/autotest_common.sh@651 -- # es=1 
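The NOT-wrapped call above is the negative test: adding a listener the target already exposes must fail, and the "Listener already exists" error surfaces as JSON-RPC -32602. Roughly the same sequence with plain rpc.py, flag spellings assumed from the standard SPDK RPC client:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 127.0.0.1 -s 4420
  # repeating the identical listener is expected to be rejected with "Invalid parameters"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 127.0.0.1 -s 4420 \
      || echo "duplicate listener rejected as expected"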
00:34:04.674 09:42:51 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:04.674 09:42:51 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:04.674 09:42:51 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:04.674 09:42:51 keyring_file -- keyring/file.sh@46 -- # bperfpid=951281 00:34:04.674 09:42:51 keyring_file -- keyring/file.sh@48 -- # waitforlisten 951281 /var/tmp/bperf.sock 00:34:04.674 09:42:51 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:34:04.674 09:42:51 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 951281 ']' 00:34:04.674 09:42:51 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:04.674 09:42:51 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:04.674 09:42:51 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:04.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:04.674 09:42:51 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:04.674 09:42:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:04.674 [2024-07-15 09:42:51.700287] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:34:04.674 [2024-07-15 09:42:51.700333] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid951281 ] 00:34:04.674 EAL: No free 2048 kB hugepages reported on node 1 00:34:04.674 [2024-07-15 09:42:51.780681] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:04.674 [2024-07-15 09:42:51.844748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:05.616 09:42:52 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:05.616 09:42:52 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:34:05.616 09:42:52 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FUcerW5jpM 00:34:05.616 09:42:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FUcerW5jpM 00:34:05.616 09:42:52 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.z755MioUFw 00:34:05.616 09:42:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.z755MioUFw 00:34:05.616 09:42:52 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:34:05.616 09:42:52 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:34:05.616 09:42:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:05.616 09:42:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:05.616 09:42:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:05.877 09:42:52 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.FUcerW5jpM == \/\t\m\p\/\t\m\p\.\F\U\c\e\r\W\5\j\p\M ]] 00:34:05.877 09:42:52 keyring_file -- keyring/file.sh@52 
-- # get_key key1 00:34:05.877 09:42:52 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:34:05.877 09:42:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:05.877 09:42:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:05.877 09:42:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:06.137 09:42:53 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.z755MioUFw == \/\t\m\p\/\t\m\p\.\z\7\5\5\M\i\o\U\F\w ]] 00:34:06.137 09:42:53 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:34:06.137 09:42:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:06.137 09:42:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:06.137 09:42:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:06.137 09:42:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:06.137 09:42:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:06.137 09:42:53 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:34:06.138 09:42:53 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:34:06.138 09:42:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:06.138 09:42:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:06.138 09:42:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:06.138 09:42:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:06.138 09:42:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:06.397 09:42:53 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:34:06.397 09:42:53 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:06.397 09:42:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:06.656 [2024-07-15 09:42:53.605093] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:06.656 nvme0n1 00:34:06.656 09:42:53 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:34:06.656 09:42:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:06.656 09:42:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:06.656 09:42:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:06.656 09:42:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:06.656 09:42:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:06.916 09:42:53 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:34:06.916 09:42:53 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:34:06.916 09:42:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:06.916 09:42:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:06.916 09:42:53 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:34:06.916 09:42:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:06.916 09:42:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:06.916 09:42:54 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:34:06.916 09:42:54 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:06.916 Running I/O for 1 seconds... 00:34:08.298 00:34:08.298 Latency(us) 00:34:08.298 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:08.298 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:34:08.298 nvme0n1 : 1.01 13532.95 52.86 0.00 0.00 9412.04 4997.12 15728.64 00:34:08.298 =================================================================================================================== 00:34:08.299 Total : 13532.95 52.86 0.00 0.00 9412.04 4997.12 15728.64 00:34:08.299 0 00:34:08.299 09:42:55 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:08.299 09:42:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:08.299 09:42:55 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:34:08.299 09:42:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:08.299 09:42:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:08.299 09:42:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:08.299 09:42:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:08.299 09:42:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:08.299 09:42:55 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:34:08.299 09:42:55 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:34:08.299 09:42:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:08.299 09:42:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:08.299 09:42:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:08.299 09:42:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:08.299 09:42:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:08.579 09:42:55 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:34:08.580 09:42:55 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:08.580 09:42:55 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:34:08.580 09:42:55 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:08.580 09:42:55 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:34:08.580 09:42:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:08.580 09:42:55 keyring_file -- common/autotest_common.sh@640 -- # type -t 
bperf_cmd 00:34:08.580 09:42:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:08.580 09:42:55 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:08.580 09:42:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:08.840 [2024-07-15 09:42:55.782321] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:08.840 [2024-07-15 09:42:55.782514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xafb630 (107): Transport endpoint is not connected 00:34:08.840 [2024-07-15 09:42:55.783511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xafb630 (9): Bad file descriptor 00:34:08.840 [2024-07-15 09:42:55.784512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:08.840 [2024-07-15 09:42:55.784520] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:08.840 [2024-07-15 09:42:55.784525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:08.840 request: 00:34:08.840 { 00:34:08.840 "name": "nvme0", 00:34:08.840 "trtype": "tcp", 00:34:08.840 "traddr": "127.0.0.1", 00:34:08.840 "adrfam": "ipv4", 00:34:08.840 "trsvcid": "4420", 00:34:08.840 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:08.841 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:08.841 "prchk_reftag": false, 00:34:08.841 "prchk_guard": false, 00:34:08.841 "hdgst": false, 00:34:08.841 "ddgst": false, 00:34:08.841 "psk": "key1", 00:34:08.841 "method": "bdev_nvme_attach_controller", 00:34:08.841 "req_id": 1 00:34:08.841 } 00:34:08.841 Got JSON-RPC error response 00:34:08.841 response: 00:34:08.841 { 00:34:08.841 "code": -5, 00:34:08.841 "message": "Input/output error" 00:34:08.841 } 00:34:08.841 09:42:55 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:34:08.841 09:42:55 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:08.841 09:42:55 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:08.841 09:42:55 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:08.841 09:42:55 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:34:08.841 09:42:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:08.841 09:42:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:08.841 09:42:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:08.841 09:42:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:08.841 09:42:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:08.841 09:42:55 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:34:08.841 09:42:55 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:34:08.841 09:42:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:08.841 09:42:55 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:34:08.841 09:42:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:08.841 09:42:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:08.841 09:42:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:09.101 09:42:56 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:34:09.101 09:42:56 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:34:09.101 09:42:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:09.101 09:42:56 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:34:09.101 09:42:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:34:09.361 09:42:56 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:34:09.361 09:42:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:09.361 09:42:56 keyring_file -- keyring/file.sh@77 -- # jq length 00:34:09.623 09:42:56 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:34:09.623 09:42:56 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.FUcerW5jpM 00:34:09.623 09:42:56 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.FUcerW5jpM 00:34:09.623 09:42:56 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:34:09.623 09:42:56 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.FUcerW5jpM 00:34:09.623 09:42:56 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:34:09.623 09:42:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:09.623 09:42:56 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:34:09.623 09:42:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:09.623 09:42:56 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FUcerW5jpM 00:34:09.623 09:42:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FUcerW5jpM 00:34:09.623 [2024-07-15 09:42:56.736383] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.FUcerW5jpM': 0100660 00:34:09.623 [2024-07-15 09:42:56.736401] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:34:09.623 request: 00:34:09.623 { 00:34:09.623 "name": "key0", 00:34:09.623 "path": "/tmp/tmp.FUcerW5jpM", 00:34:09.623 "method": "keyring_file_add_key", 00:34:09.623 "req_id": 1 00:34:09.623 } 00:34:09.623 Got JSON-RPC error response 00:34:09.623 response: 00:34:09.623 { 00:34:09.623 "code": -1, 00:34:09.623 "message": "Operation not permitted" 00:34:09.623 } 00:34:09.623 09:42:56 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:34:09.623 09:42:56 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:09.623 09:42:56 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:09.623 09:42:56 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
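keyring_file_add_key enforces owner-only permissions on the key file, which is what the 0660/0600 dance in this part of the trace checks; the same check in isolation (path and socket taken from the log):

  key=/tmp/tmp.FUcerW5jpM
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  chmod 0660 "$key"
  $rpc -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key" \
      || echo "rejected: group/other access on the key file is not allowed"

  chmod 0600 "$key"
  $rpc -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key"   # accepted once owner-only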
00:34:09.623 09:42:56 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.FUcerW5jpM 00:34:09.623 09:42:56 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FUcerW5jpM 00:34:09.623 09:42:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FUcerW5jpM 00:34:09.885 09:42:56 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.FUcerW5jpM 00:34:09.885 09:42:56 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:34:09.885 09:42:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:09.885 09:42:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:09.885 09:42:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:09.885 09:42:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:09.885 09:42:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:09.885 09:42:57 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:34:09.885 09:42:57 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:09.885 09:42:57 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:34:09.885 09:42:57 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:09.885 09:42:57 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:34:09.885 09:42:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:09.885 09:42:57 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:34:09.885 09:42:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:09.885 09:42:57 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:09.885 09:42:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:10.146 [2024-07-15 09:42:57.221613] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.FUcerW5jpM': No such file or directory 00:34:10.146 [2024-07-15 09:42:57.221628] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:34:10.146 [2024-07-15 09:42:57.221644] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:34:10.146 [2024-07-15 09:42:57.221649] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:10.146 [2024-07-15 09:42:57.221654] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:34:10.146 request: 00:34:10.146 { 00:34:10.146 "name": "nvme0", 00:34:10.146 "trtype": "tcp", 00:34:10.146 "traddr": "127.0.0.1", 00:34:10.146 "adrfam": "ipv4", 00:34:10.146 "trsvcid": "4420", 00:34:10.146 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:34:10.146 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:10.146 "prchk_reftag": false, 00:34:10.146 "prchk_guard": false, 00:34:10.146 "hdgst": false, 00:34:10.146 "ddgst": false, 00:34:10.146 "psk": "key0", 00:34:10.146 "method": "bdev_nvme_attach_controller", 00:34:10.146 "req_id": 1 00:34:10.146 } 00:34:10.146 Got JSON-RPC error response 00:34:10.146 response: 00:34:10.146 { 00:34:10.146 "code": -19, 00:34:10.146 "message": "No such device" 00:34:10.146 } 00:34:10.146 09:42:57 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:34:10.146 09:42:57 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:10.146 09:42:57 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:10.146 09:42:57 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:10.146 09:42:57 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:34:10.146 09:42:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:10.408 09:42:57 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:10.408 09:42:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:10.408 09:42:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:10.408 09:42:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:10.408 09:42:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:10.408 09:42:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:10.408 09:42:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YeGqRU5imF 00:34:10.408 09:42:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:10.408 09:42:57 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:10.408 09:42:57 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:34:10.408 09:42:57 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:10.408 09:42:57 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:34:10.408 09:42:57 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:34:10.408 09:42:57 keyring_file -- nvmf/common.sh@705 -- # python - 00:34:10.408 09:42:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YeGqRU5imF 00:34:10.408 09:42:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YeGqRU5imF 00:34:10.408 09:42:57 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.YeGqRU5imF 00:34:10.408 09:42:57 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YeGqRU5imF 00:34:10.408 09:42:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YeGqRU5imF 00:34:10.408 09:42:57 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:10.408 09:42:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:10.668 nvme0n1 00:34:10.668 09:42:57 keyring_file -- keyring/file.sh@99 
-- # get_refcnt key0 00:34:10.668 09:42:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:10.668 09:42:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:10.669 09:42:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:10.669 09:42:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:10.669 09:42:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:10.929 09:42:57 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:34:10.929 09:42:57 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:34:10.929 09:42:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:11.190 09:42:58 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:34:11.190 09:42:58 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:34:11.190 09:42:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:11.190 09:42:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:11.190 09:42:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:11.190 09:42:58 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:34:11.190 09:42:58 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:34:11.190 09:42:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:11.190 09:42:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:11.190 09:42:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:11.190 09:42:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:11.190 09:42:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:11.452 09:42:58 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:34:11.452 09:42:58 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:11.452 09:42:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:11.452 09:42:58 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:34:11.452 09:42:58 keyring_file -- keyring/file.sh@104 -- # jq length 00:34:11.452 09:42:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:11.714 09:42:58 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:34:11.714 09:42:58 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YeGqRU5imF 00:34:11.714 09:42:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YeGqRU5imF 00:34:11.975 09:42:58 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.z755MioUFw 00:34:11.975 09:42:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.z755MioUFw 00:34:11.976 09:42:59 keyring_file -- keyring/file.sh@109 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:11.976 09:42:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:12.236 nvme0n1 00:34:12.237 09:42:59 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:34:12.237 09:42:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:34:12.498 09:42:59 keyring_file -- keyring/file.sh@112 -- # config='{ 00:34:12.498 "subsystems": [ 00:34:12.498 { 00:34:12.498 "subsystem": "keyring", 00:34:12.498 "config": [ 00:34:12.498 { 00:34:12.498 "method": "keyring_file_add_key", 00:34:12.498 "params": { 00:34:12.498 "name": "key0", 00:34:12.498 "path": "/tmp/tmp.YeGqRU5imF" 00:34:12.498 } 00:34:12.498 }, 00:34:12.498 { 00:34:12.498 "method": "keyring_file_add_key", 00:34:12.498 "params": { 00:34:12.498 "name": "key1", 00:34:12.498 "path": "/tmp/tmp.z755MioUFw" 00:34:12.498 } 00:34:12.498 } 00:34:12.498 ] 00:34:12.498 }, 00:34:12.498 { 00:34:12.498 "subsystem": "iobuf", 00:34:12.498 "config": [ 00:34:12.498 { 00:34:12.498 "method": "iobuf_set_options", 00:34:12.498 "params": { 00:34:12.498 "small_pool_count": 8192, 00:34:12.498 "large_pool_count": 1024, 00:34:12.498 "small_bufsize": 8192, 00:34:12.498 "large_bufsize": 135168 00:34:12.498 } 00:34:12.498 } 00:34:12.498 ] 00:34:12.498 }, 00:34:12.498 { 00:34:12.498 "subsystem": "sock", 00:34:12.498 "config": [ 00:34:12.498 { 00:34:12.498 "method": "sock_set_default_impl", 00:34:12.498 "params": { 00:34:12.498 "impl_name": "posix" 00:34:12.498 } 00:34:12.498 }, 00:34:12.498 { 00:34:12.498 "method": "sock_impl_set_options", 00:34:12.498 "params": { 00:34:12.498 "impl_name": "ssl", 00:34:12.498 "recv_buf_size": 4096, 00:34:12.498 "send_buf_size": 4096, 00:34:12.498 "enable_recv_pipe": true, 00:34:12.498 "enable_quickack": false, 00:34:12.498 "enable_placement_id": 0, 00:34:12.498 "enable_zerocopy_send_server": true, 00:34:12.498 "enable_zerocopy_send_client": false, 00:34:12.498 "zerocopy_threshold": 0, 00:34:12.498 "tls_version": 0, 00:34:12.498 "enable_ktls": false 00:34:12.498 } 00:34:12.498 }, 00:34:12.498 { 00:34:12.498 "method": "sock_impl_set_options", 00:34:12.498 "params": { 00:34:12.498 "impl_name": "posix", 00:34:12.498 "recv_buf_size": 2097152, 00:34:12.498 "send_buf_size": 2097152, 00:34:12.498 "enable_recv_pipe": true, 00:34:12.498 "enable_quickack": false, 00:34:12.498 "enable_placement_id": 0, 00:34:12.498 "enable_zerocopy_send_server": true, 00:34:12.498 "enable_zerocopy_send_client": false, 00:34:12.498 "zerocopy_threshold": 0, 00:34:12.498 "tls_version": 0, 00:34:12.498 "enable_ktls": false 00:34:12.498 } 00:34:12.498 } 00:34:12.498 ] 00:34:12.498 }, 00:34:12.498 { 00:34:12.498 "subsystem": "vmd", 00:34:12.498 "config": [] 00:34:12.498 }, 00:34:12.498 { 00:34:12.498 "subsystem": "accel", 00:34:12.498 "config": [ 00:34:12.498 { 00:34:12.498 "method": "accel_set_options", 00:34:12.498 "params": { 00:34:12.498 "small_cache_size": 128, 00:34:12.498 "large_cache_size": 16, 00:34:12.498 "task_count": 2048, 00:34:12.498 "sequence_count": 2048, 00:34:12.498 "buf_count": 2048 00:34:12.498 } 00:34:12.498 } 00:34:12.498 ] 00:34:12.498 }, 00:34:12.498 { 00:34:12.498 
"subsystem": "bdev", 00:34:12.498 "config": [ 00:34:12.498 { 00:34:12.498 "method": "bdev_set_options", 00:34:12.498 "params": { 00:34:12.498 "bdev_io_pool_size": 65535, 00:34:12.498 "bdev_io_cache_size": 256, 00:34:12.498 "bdev_auto_examine": true, 00:34:12.498 "iobuf_small_cache_size": 128, 00:34:12.498 "iobuf_large_cache_size": 16 00:34:12.498 } 00:34:12.498 }, 00:34:12.498 { 00:34:12.498 "method": "bdev_raid_set_options", 00:34:12.499 "params": { 00:34:12.499 "process_window_size_kb": 1024 00:34:12.499 } 00:34:12.499 }, 00:34:12.499 { 00:34:12.499 "method": "bdev_iscsi_set_options", 00:34:12.499 "params": { 00:34:12.499 "timeout_sec": 30 00:34:12.499 } 00:34:12.499 }, 00:34:12.499 { 00:34:12.499 "method": "bdev_nvme_set_options", 00:34:12.499 "params": { 00:34:12.499 "action_on_timeout": "none", 00:34:12.499 "timeout_us": 0, 00:34:12.499 "timeout_admin_us": 0, 00:34:12.499 "keep_alive_timeout_ms": 10000, 00:34:12.499 "arbitration_burst": 0, 00:34:12.499 "low_priority_weight": 0, 00:34:12.499 "medium_priority_weight": 0, 00:34:12.499 "high_priority_weight": 0, 00:34:12.499 "nvme_adminq_poll_period_us": 10000, 00:34:12.499 "nvme_ioq_poll_period_us": 0, 00:34:12.499 "io_queue_requests": 512, 00:34:12.499 "delay_cmd_submit": true, 00:34:12.499 "transport_retry_count": 4, 00:34:12.499 "bdev_retry_count": 3, 00:34:12.499 "transport_ack_timeout": 0, 00:34:12.499 "ctrlr_loss_timeout_sec": 0, 00:34:12.499 "reconnect_delay_sec": 0, 00:34:12.499 "fast_io_fail_timeout_sec": 0, 00:34:12.499 "disable_auto_failback": false, 00:34:12.499 "generate_uuids": false, 00:34:12.499 "transport_tos": 0, 00:34:12.499 "nvme_error_stat": false, 00:34:12.499 "rdma_srq_size": 0, 00:34:12.499 "io_path_stat": false, 00:34:12.499 "allow_accel_sequence": false, 00:34:12.499 "rdma_max_cq_size": 0, 00:34:12.499 "rdma_cm_event_timeout_ms": 0, 00:34:12.499 "dhchap_digests": [ 00:34:12.499 "sha256", 00:34:12.499 "sha384", 00:34:12.499 "sha512" 00:34:12.499 ], 00:34:12.499 "dhchap_dhgroups": [ 00:34:12.499 "null", 00:34:12.499 "ffdhe2048", 00:34:12.499 "ffdhe3072", 00:34:12.499 "ffdhe4096", 00:34:12.499 "ffdhe6144", 00:34:12.499 "ffdhe8192" 00:34:12.499 ] 00:34:12.499 } 00:34:12.499 }, 00:34:12.499 { 00:34:12.499 "method": "bdev_nvme_attach_controller", 00:34:12.499 "params": { 00:34:12.499 "name": "nvme0", 00:34:12.499 "trtype": "TCP", 00:34:12.499 "adrfam": "IPv4", 00:34:12.499 "traddr": "127.0.0.1", 00:34:12.499 "trsvcid": "4420", 00:34:12.499 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:12.499 "prchk_reftag": false, 00:34:12.499 "prchk_guard": false, 00:34:12.499 "ctrlr_loss_timeout_sec": 0, 00:34:12.499 "reconnect_delay_sec": 0, 00:34:12.499 "fast_io_fail_timeout_sec": 0, 00:34:12.499 "psk": "key0", 00:34:12.499 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:12.499 "hdgst": false, 00:34:12.499 "ddgst": false 00:34:12.499 } 00:34:12.499 }, 00:34:12.499 { 00:34:12.499 "method": "bdev_nvme_set_hotplug", 00:34:12.499 "params": { 00:34:12.499 "period_us": 100000, 00:34:12.499 "enable": false 00:34:12.499 } 00:34:12.499 }, 00:34:12.499 { 00:34:12.499 "method": "bdev_wait_for_examine" 00:34:12.499 } 00:34:12.499 ] 00:34:12.499 }, 00:34:12.499 { 00:34:12.499 "subsystem": "nbd", 00:34:12.499 "config": [] 00:34:12.499 } 00:34:12.499 ] 00:34:12.499 }' 00:34:12.499 09:42:59 keyring_file -- keyring/file.sh@114 -- # killprocess 951281 00:34:12.499 09:42:59 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 951281 ']' 00:34:12.499 09:42:59 keyring_file -- common/autotest_common.sh@952 -- # kill -0 951281 00:34:12.499 09:42:59 
keyring_file -- common/autotest_common.sh@953 -- # uname 00:34:12.499 09:42:59 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:12.499 09:42:59 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 951281 00:34:12.499 09:42:59 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:12.499 09:42:59 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:12.499 09:42:59 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 951281' 00:34:12.499 killing process with pid 951281 00:34:12.499 09:42:59 keyring_file -- common/autotest_common.sh@967 -- # kill 951281 00:34:12.499 Received shutdown signal, test time was about 1.000000 seconds 00:34:12.499 00:34:12.499 Latency(us) 00:34:12.499 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:12.499 =================================================================================================================== 00:34:12.499 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:12.499 09:42:59 keyring_file -- common/autotest_common.sh@972 -- # wait 951281 00:34:12.761 09:42:59 keyring_file -- keyring/file.sh@117 -- # bperfpid=952876 00:34:12.761 09:42:59 keyring_file -- keyring/file.sh@119 -- # waitforlisten 952876 /var/tmp/bperf.sock 00:34:12.761 09:42:59 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 952876 ']' 00:34:12.761 09:42:59 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:12.761 09:42:59 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:12.761 09:42:59 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:34:12.761 09:42:59 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:12.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
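A condensed sketch of the keyring/file.sh steps traced above, with the xtrace noise stripped; the temporary key path /tmp/tmp.YeGqRU5imF is simply what mktemp returned in this run, and the rpc.py calls are the same ones the log shows being issued through bperf_cmd:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # register the PSK file with the bdevperf app and attach a TLS-protected controller
    chmod 0600 /tmp/tmp.YeGqRU5imF
    $rpc -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YeGqRU5imF
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

    # the attached controller holds a reference, so refcnt reads 2 and removal only marks the key
    $rpc -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0")'
    $rpc -s /var/tmp/bperf.sock keyring_file_remove_key key0
    $rpc -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0   # drops the last reference; keyring_get_keys then returns an empty list

file.sh then saves this configuration with save_config and replays it into the fresh bdevperf instance launched just above over /dev/fd/63, confirming that both registered keys survive a configuration round-trip.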
00:34:12.761 09:42:59 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:12.761 09:42:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:12.761 09:42:59 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:34:12.761 "subsystems": [ 00:34:12.761 { 00:34:12.761 "subsystem": "keyring", 00:34:12.761 "config": [ 00:34:12.761 { 00:34:12.761 "method": "keyring_file_add_key", 00:34:12.761 "params": { 00:34:12.761 "name": "key0", 00:34:12.761 "path": "/tmp/tmp.YeGqRU5imF" 00:34:12.761 } 00:34:12.761 }, 00:34:12.761 { 00:34:12.761 "method": "keyring_file_add_key", 00:34:12.761 "params": { 00:34:12.761 "name": "key1", 00:34:12.761 "path": "/tmp/tmp.z755MioUFw" 00:34:12.761 } 00:34:12.761 } 00:34:12.761 ] 00:34:12.761 }, 00:34:12.761 { 00:34:12.761 "subsystem": "iobuf", 00:34:12.761 "config": [ 00:34:12.761 { 00:34:12.761 "method": "iobuf_set_options", 00:34:12.761 "params": { 00:34:12.761 "small_pool_count": 8192, 00:34:12.761 "large_pool_count": 1024, 00:34:12.761 "small_bufsize": 8192, 00:34:12.761 "large_bufsize": 135168 00:34:12.762 } 00:34:12.762 } 00:34:12.762 ] 00:34:12.762 }, 00:34:12.762 { 00:34:12.762 "subsystem": "sock", 00:34:12.762 "config": [ 00:34:12.762 { 00:34:12.762 "method": "sock_set_default_impl", 00:34:12.762 "params": { 00:34:12.762 "impl_name": "posix" 00:34:12.762 } 00:34:12.762 }, 00:34:12.762 { 00:34:12.762 "method": "sock_impl_set_options", 00:34:12.762 "params": { 00:34:12.762 "impl_name": "ssl", 00:34:12.762 "recv_buf_size": 4096, 00:34:12.762 "send_buf_size": 4096, 00:34:12.762 "enable_recv_pipe": true, 00:34:12.762 "enable_quickack": false, 00:34:12.762 "enable_placement_id": 0, 00:34:12.762 "enable_zerocopy_send_server": true, 00:34:12.762 "enable_zerocopy_send_client": false, 00:34:12.762 "zerocopy_threshold": 0, 00:34:12.762 "tls_version": 0, 00:34:12.762 "enable_ktls": false 00:34:12.762 } 00:34:12.762 }, 00:34:12.762 { 00:34:12.762 "method": "sock_impl_set_options", 00:34:12.762 "params": { 00:34:12.762 "impl_name": "posix", 00:34:12.762 "recv_buf_size": 2097152, 00:34:12.762 "send_buf_size": 2097152, 00:34:12.762 "enable_recv_pipe": true, 00:34:12.762 "enable_quickack": false, 00:34:12.762 "enable_placement_id": 0, 00:34:12.762 "enable_zerocopy_send_server": true, 00:34:12.762 "enable_zerocopy_send_client": false, 00:34:12.762 "zerocopy_threshold": 0, 00:34:12.762 "tls_version": 0, 00:34:12.762 "enable_ktls": false 00:34:12.762 } 00:34:12.762 } 00:34:12.762 ] 00:34:12.762 }, 00:34:12.762 { 00:34:12.762 "subsystem": "vmd", 00:34:12.762 "config": [] 00:34:12.762 }, 00:34:12.762 { 00:34:12.762 "subsystem": "accel", 00:34:12.762 "config": [ 00:34:12.762 { 00:34:12.762 "method": "accel_set_options", 00:34:12.762 "params": { 00:34:12.762 "small_cache_size": 128, 00:34:12.762 "large_cache_size": 16, 00:34:12.762 "task_count": 2048, 00:34:12.762 "sequence_count": 2048, 00:34:12.762 "buf_count": 2048 00:34:12.762 } 00:34:12.762 } 00:34:12.762 ] 00:34:12.762 }, 00:34:12.762 { 00:34:12.762 "subsystem": "bdev", 00:34:12.762 "config": [ 00:34:12.762 { 00:34:12.762 "method": "bdev_set_options", 00:34:12.762 "params": { 00:34:12.762 "bdev_io_pool_size": 65535, 00:34:12.762 "bdev_io_cache_size": 256, 00:34:12.762 "bdev_auto_examine": true, 00:34:12.762 "iobuf_small_cache_size": 128, 00:34:12.762 "iobuf_large_cache_size": 16 00:34:12.762 } 00:34:12.762 }, 00:34:12.762 { 00:34:12.762 "method": "bdev_raid_set_options", 00:34:12.762 "params": { 00:34:12.762 "process_window_size_kb": 1024 00:34:12.762 } 00:34:12.762 }, 00:34:12.762 { 00:34:12.762 
"method": "bdev_iscsi_set_options", 00:34:12.762 "params": { 00:34:12.762 "timeout_sec": 30 00:34:12.762 } 00:34:12.762 }, 00:34:12.762 { 00:34:12.762 "method": "bdev_nvme_set_options", 00:34:12.762 "params": { 00:34:12.762 "action_on_timeout": "none", 00:34:12.762 "timeout_us": 0, 00:34:12.762 "timeout_admin_us": 0, 00:34:12.762 "keep_alive_timeout_ms": 10000, 00:34:12.762 "arbitration_burst": 0, 00:34:12.762 "low_priority_weight": 0, 00:34:12.762 "medium_priority_weight": 0, 00:34:12.762 "high_priority_weight": 0, 00:34:12.762 "nvme_adminq_poll_period_us": 10000, 00:34:12.762 "nvme_ioq_poll_period_us": 0, 00:34:12.762 "io_queue_requests": 512, 00:34:12.762 "delay_cmd_submit": true, 00:34:12.762 "transport_retry_count": 4, 00:34:12.762 "bdev_retry_count": 3, 00:34:12.762 "transport_ack_timeout": 0, 00:34:12.762 "ctrlr_loss_timeout_sec": 0, 00:34:12.762 "reconnect_delay_sec": 0, 00:34:12.762 "fast_io_fail_timeout_sec": 0, 00:34:12.762 "disable_auto_failback": false, 00:34:12.762 "generate_uuids": false, 00:34:12.762 "transport_tos": 0, 00:34:12.762 "nvme_error_stat": false, 00:34:12.762 "rdma_srq_size": 0, 00:34:12.762 "io_path_stat": false, 00:34:12.762 "allow_accel_sequence": false, 00:34:12.762 "rdma_max_cq_size": 0, 00:34:12.762 "rdma_cm_event_timeout_ms": 0, 00:34:12.762 "dhchap_digests": [ 00:34:12.762 "sha256", 00:34:12.762 "sha384", 00:34:12.762 "sha512" 00:34:12.762 ], 00:34:12.762 "dhchap_dhgroups": [ 00:34:12.762 "null", 00:34:12.762 "ffdhe2048", 00:34:12.762 "ffdhe3072", 00:34:12.762 "ffdhe4096", 00:34:12.762 "ffdhe6144", 00:34:12.762 "ffdhe8192" 00:34:12.762 ] 00:34:12.762 } 00:34:12.762 }, 00:34:12.762 { 00:34:12.762 "method": "bdev_nvme_attach_controller", 00:34:12.762 "params": { 00:34:12.762 "name": "nvme0", 00:34:12.762 "trtype": "TCP", 00:34:12.762 "adrfam": "IPv4", 00:34:12.762 "traddr": "127.0.0.1", 00:34:12.762 "trsvcid": "4420", 00:34:12.762 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:12.762 "prchk_reftag": false, 00:34:12.762 "prchk_guard": false, 00:34:12.762 "ctrlr_loss_timeout_sec": 0, 00:34:12.762 "reconnect_delay_sec": 0, 00:34:12.762 "fast_io_fail_timeout_sec": 0, 00:34:12.762 "psk": "key0", 00:34:12.762 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:12.762 "hdgst": false, 00:34:12.762 "ddgst": false 00:34:12.762 } 00:34:12.762 }, 00:34:12.762 { 00:34:12.762 "method": "bdev_nvme_set_hotplug", 00:34:12.762 "params": { 00:34:12.762 "period_us": 100000, 00:34:12.762 "enable": false 00:34:12.762 } 00:34:12.762 }, 00:34:12.762 { 00:34:12.762 "method": "bdev_wait_for_examine" 00:34:12.762 } 00:34:12.762 ] 00:34:12.762 }, 00:34:12.762 { 00:34:12.762 "subsystem": "nbd", 00:34:12.762 "config": [] 00:34:12.762 } 00:34:12.762 ] 00:34:12.762 }' 00:34:12.762 [2024-07-15 09:42:59.792873] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:34:12.762 [2024-07-15 09:42:59.792929] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid952876 ] 00:34:12.762 EAL: No free 2048 kB hugepages reported on node 1 00:34:12.762 [2024-07-15 09:42:59.871945] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:12.762 [2024-07-15 09:42:59.925542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:13.023 [2024-07-15 09:43:00.070580] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:13.596 09:43:00 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:13.596 09:43:00 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:34:13.596 09:43:00 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:34:13.596 09:43:00 keyring_file -- keyring/file.sh@120 -- # jq length 00:34:13.596 09:43:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:13.596 09:43:00 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:34:13.596 09:43:00 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:34:13.596 09:43:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:13.596 09:43:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:13.596 09:43:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:13.596 09:43:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:13.596 09:43:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:13.857 09:43:00 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:34:13.857 09:43:00 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:34:13.857 09:43:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:13.857 09:43:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:13.857 09:43:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:13.857 09:43:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:13.857 09:43:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:13.857 09:43:01 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:34:13.857 09:43:01 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:34:13.857 09:43:01 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:34:13.857 09:43:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:34:14.119 09:43:01 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:34:14.119 09:43:01 keyring_file -- keyring/file.sh@1 -- # cleanup 00:34:14.119 09:43:01 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.YeGqRU5imF /tmp/tmp.z755MioUFw 00:34:14.119 09:43:01 keyring_file -- keyring/file.sh@20 -- # killprocess 952876 00:34:14.119 09:43:01 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 952876 ']' 00:34:14.119 09:43:01 keyring_file -- common/autotest_common.sh@952 -- # kill -0 952876 00:34:14.119 09:43:01 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:34:14.119 09:43:01 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:14.119 09:43:01 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 952876 00:34:14.119 09:43:01 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:14.119 09:43:01 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:14.119 09:43:01 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 952876' 00:34:14.119 killing process with pid 952876 00:34:14.119 09:43:01 keyring_file -- common/autotest_common.sh@967 -- # kill 952876 00:34:14.119 Received shutdown signal, test time was about 1.000000 seconds 00:34:14.119 00:34:14.119 Latency(us) 00:34:14.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:14.119 =================================================================================================================== 00:34:14.119 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:14.119 09:43:01 keyring_file -- common/autotest_common.sh@972 -- # wait 952876 00:34:14.380 09:43:01 keyring_file -- keyring/file.sh@21 -- # killprocess 951099 00:34:14.380 09:43:01 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 951099 ']' 00:34:14.380 09:43:01 keyring_file -- common/autotest_common.sh@952 -- # kill -0 951099 00:34:14.380 09:43:01 keyring_file -- common/autotest_common.sh@953 -- # uname 00:34:14.380 09:43:01 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:14.380 09:43:01 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 951099 00:34:14.380 09:43:01 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:14.380 09:43:01 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:14.380 09:43:01 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 951099' 00:34:14.380 killing process with pid 951099 00:34:14.380 09:43:01 keyring_file -- common/autotest_common.sh@967 -- # kill 951099 00:34:14.380 [2024-07-15 09:43:01.412205] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:34:14.380 09:43:01 keyring_file -- common/autotest_common.sh@972 -- # wait 951099 00:34:14.642 00:34:14.642 real 0m11.112s 00:34:14.642 user 0m26.445s 00:34:14.642 sys 0m2.614s 00:34:14.642 09:43:01 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:14.642 09:43:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:14.642 ************************************ 00:34:14.642 END TEST keyring_file 00:34:14.642 ************************************ 00:34:14.642 09:43:01 -- common/autotest_common.sh@1142 -- # return 0 00:34:14.642 09:43:01 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:34:14.642 09:43:01 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:14.642 09:43:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:14.642 09:43:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:14.642 09:43:01 -- common/autotest_common.sh@10 -- # set +x 00:34:14.642 ************************************ 00:34:14.642 START TEST keyring_linux 00:34:14.642 ************************************ 00:34:14.642 09:43:01 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:14.642 * Looking for test storage... 00:34:14.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:14.642 09:43:01 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:14.642 09:43:01 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:14.642 09:43:01 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:34:14.642 09:43:01 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:14.642 09:43:01 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:14.642 09:43:01 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:14.642 09:43:01 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:14.643 09:43:01 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:14.643 09:43:01 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:14.643 09:43:01 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:14.643 09:43:01 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.643 09:43:01 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.643 09:43:01 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.643 09:43:01 keyring_linux -- paths/export.sh@5 -- # export PATH 00:34:14.643 09:43:01 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:14.643 09:43:01 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:14.643 09:43:01 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:14.643 09:43:01 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:14.643 09:43:01 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:34:14.643 09:43:01 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:34:14.643 09:43:01 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:34:14.643 09:43:01 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:34:14.643 09:43:01 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:14.643 09:43:01 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:34:14.643 09:43:01 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:14.643 09:43:01 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:14.643 09:43:01 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:34:14.643 09:43:01 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:34:14.643 09:43:01 keyring_linux -- nvmf/common.sh@705 -- # python - 00:34:14.905 09:43:01 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:34:14.905 09:43:01 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:34:14.905 /tmp/:spdk-test:key0 00:34:14.905 09:43:01 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:34:14.905 09:43:01 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:14.905 09:43:01 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:34:14.905 09:43:01 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:14.905 09:43:01 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:14.905 09:43:01 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:34:14.905 09:43:01 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:14.905 09:43:01 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:14.905 09:43:01 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:34:14.905 09:43:01 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:14.905 09:43:01 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:34:14.905 09:43:01 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:34:14.905 09:43:01 keyring_linux -- nvmf/common.sh@705 -- # python - 00:34:14.905 09:43:01 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:34:14.905 09:43:01 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:34:14.905 /tmp/:spdk-test:key1 00:34:14.905 09:43:01 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=953470 00:34:14.905 09:43:01 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 953470 00:34:14.905 09:43:01 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:14.905 09:43:01 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 953470 ']' 00:34:14.905 09:43:01 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:14.905 09:43:01 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:14.905 09:43:01 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:14.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:14.905 09:43:01 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:14.905 09:43:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:14.905 [2024-07-15 09:43:01.986939] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
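The /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 files written above hold the key in NVMe TLS PSK interchange form. A minimal sketch of what format_interchange_psk appears to compute, inferred from the strings this run produced (NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: for key 00112233445566778899aabbccddeeff with digest 0); the CRC32 trailer and its little-endian packing are assumptions, not something the log states:

    format_interchange_psk() {
        local key=$1 digest=$2
        # assumed layout: NVMeTLSkey-1:<digest>:<base64(key bytes + CRC32 of key)>:
        python3 -c 'import sys, base64, zlib; key = sys.argv[1].encode(); crc = zlib.crc32(key).to_bytes(4, "little"); print("NVMeTLSkey-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))' "$key" "$digest"
    }

    format_interchange_psk 00112233445566778899aabbccddeeff 0 > /tmp/:spdk-test:key0
    chmod 0600 /tmp/:spdk-test:key0   # mirror the chmod 0600 the script applies to every key file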
00:34:14.905 [2024-07-15 09:43:01.987015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid953470 ] 00:34:14.905 EAL: No free 2048 kB hugepages reported on node 1 00:34:14.905 [2024-07-15 09:43:02.057958] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:15.168 [2024-07-15 09:43:02.132107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:15.741 09:43:02 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:15.741 09:43:02 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:34:15.741 09:43:02 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:34:15.741 09:43:02 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.741 09:43:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:15.741 [2024-07-15 09:43:02.748051] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:15.741 null0 00:34:15.741 [2024-07-15 09:43:02.780091] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:15.741 [2024-07-15 09:43:02.780471] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:15.741 09:43:02 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.741 09:43:02 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:34:15.741 746332764 00:34:15.741 09:43:02 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:34:15.741 630537149 00:34:15.741 09:43:02 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=953528 00:34:15.741 09:43:02 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 953528 /var/tmp/bperf.sock 00:34:15.741 09:43:02 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:34:15.741 09:43:02 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 953528 ']' 00:34:15.741 09:43:02 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:15.741 09:43:02 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:15.741 09:43:02 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:15.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:15.741 09:43:02 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:15.741 09:43:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:15.741 [2024-07-15 09:43:02.857345] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
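keyring_linux works against the kernel session keyring via keyctl; the serial numbers 746332764 and 630537149 are simply what the kernel handed back in this run. The add / search / print / unlink cycle seen in the log boils down to:

    # load the interchange-format PSK into the session keyring under the name SPDK will look up
    sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)   # -> 746332764 here
    keyctl search @s user :spdk-test:key0   # resolves the name back to the same serial
    keyctl print "$sn"                      # prints the NVMeTLSkey-1:00:... payload
    keyctl unlink "$sn"                     # cleanup; the log reports "1 links removed"

bdevperf, started with --wait-for-rpc, then resolves the same key by name once the Linux keyring module is enabled, which is the sequence the next few log entries show:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bperf.sock keyring_linux_set_options --enable   # allow ":spdk-test:key0"-style kernel keyring lookups
    $rpc -s /var/tmp/bperf.sock framework_start_init
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0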
00:34:15.741 [2024-07-15 09:43:02.857393] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid953528 ] 00:34:15.741 EAL: No free 2048 kB hugepages reported on node 1 00:34:15.741 [2024-07-15 09:43:02.936584] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.003 [2024-07-15 09:43:02.990112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:16.574 09:43:03 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:16.574 09:43:03 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:34:16.574 09:43:03 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:34:16.574 09:43:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:34:16.834 09:43:03 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:34:16.834 09:43:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:16.834 09:43:03 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:16.834 09:43:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:17.092 [2024-07-15 09:43:04.136315] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:17.092 nvme0n1 00:34:17.092 09:43:04 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:34:17.092 09:43:04 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:34:17.092 09:43:04 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:17.092 09:43:04 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:17.092 09:43:04 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:17.092 09:43:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:17.351 09:43:04 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:34:17.351 09:43:04 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:17.351 09:43:04 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:34:17.351 09:43:04 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:34:17.352 09:43:04 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:17.352 09:43:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:17.352 09:43:04 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:34:17.611 09:43:04 keyring_linux -- keyring/linux.sh@25 -- # sn=746332764 00:34:17.611 09:43:04 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:34:17.611 09:43:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:34:17.611 09:43:04 keyring_linux -- keyring/linux.sh@26 -- # [[ 746332764 == \7\4\6\3\3\2\7\6\4 ]] 00:34:17.611 09:43:04 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 746332764 00:34:17.611 09:43:04 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:34:17.611 09:43:04 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:17.611 Running I/O for 1 seconds... 00:34:18.580 00:34:18.580 Latency(us) 00:34:18.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:18.580 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:18.580 nvme0n1 : 1.01 14094.70 55.06 0.00 0.00 9040.43 3904.85 11741.87 00:34:18.580 =================================================================================================================== 00:34:18.580 Total : 14094.70 55.06 0.00 0.00 9040.43 3904.85 11741.87 00:34:18.580 0 00:34:18.580 09:43:05 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:18.580 09:43:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:18.842 09:43:05 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:34:18.842 09:43:05 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:34:18.842 09:43:05 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:18.842 09:43:05 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:18.842 09:43:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:18.842 09:43:05 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:18.842 09:43:05 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:34:18.842 09:43:05 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:18.842 09:43:05 keyring_linux -- keyring/linux.sh@23 -- # return 00:34:18.842 09:43:05 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:18.842 09:43:05 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:34:18.842 09:43:05 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:18.842 09:43:05 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:34:18.842 09:43:05 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:18.842 09:43:05 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:34:18.842 09:43:05 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:18.842 09:43:05 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:18.842 09:43:05 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:19.104 [2024-07-15 09:43:06.130425] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:19.104 [2024-07-15 09:43:06.130991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9c000 (107): Transport endpoint is not connected 00:34:19.104 [2024-07-15 09:43:06.131988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9c000 (9): Bad file descriptor 00:34:19.104 [2024-07-15 09:43:06.132996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:19.104 [2024-07-15 09:43:06.133003] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:19.104 [2024-07-15 09:43:06.133009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:19.104 request: 00:34:19.104 { 00:34:19.104 "name": "nvme0", 00:34:19.104 "trtype": "tcp", 00:34:19.104 "traddr": "127.0.0.1", 00:34:19.104 "adrfam": "ipv4", 00:34:19.104 "trsvcid": "4420", 00:34:19.104 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:19.104 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:19.104 "prchk_reftag": false, 00:34:19.104 "prchk_guard": false, 00:34:19.104 "hdgst": false, 00:34:19.104 "ddgst": false, 00:34:19.104 "psk": ":spdk-test:key1", 00:34:19.104 "method": "bdev_nvme_attach_controller", 00:34:19.104 "req_id": 1 00:34:19.104 } 00:34:19.104 Got JSON-RPC error response 00:34:19.104 response: 00:34:19.104 { 00:34:19.104 "code": -5, 00:34:19.104 "message": "Input/output error" 00:34:19.104 } 00:34:19.104 09:43:06 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:34:19.104 09:43:06 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:19.104 09:43:06 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:19.104 09:43:06 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:19.104 09:43:06 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:34:19.104 09:43:06 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:34:19.104 09:43:06 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:34:19.104 09:43:06 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:34:19.104 09:43:06 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:34:19.104 09:43:06 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:34:19.104 09:43:06 keyring_linux -- keyring/linux.sh@33 -- # sn=746332764 00:34:19.104 09:43:06 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 746332764 00:34:19.104 1 links removed 00:34:19.104 09:43:06 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:34:19.104 09:43:06 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:34:19.104 09:43:06 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:34:19.104 09:43:06 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:34:19.104 09:43:06 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:34:19.104 09:43:06 keyring_linux -- keyring/linux.sh@33 -- # sn=630537149 00:34:19.104 
09:43:06 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 630537149 00:34:19.104 1 links removed 00:34:19.104 09:43:06 keyring_linux -- keyring/linux.sh@41 -- # killprocess 953528 00:34:19.104 09:43:06 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 953528 ']' 00:34:19.104 09:43:06 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 953528 00:34:19.104 09:43:06 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:34:19.104 09:43:06 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:19.104 09:43:06 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 953528 00:34:19.104 09:43:06 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:19.104 09:43:06 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:19.104 09:43:06 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 953528' 00:34:19.104 killing process with pid 953528 00:34:19.104 09:43:06 keyring_linux -- common/autotest_common.sh@967 -- # kill 953528 00:34:19.104 Received shutdown signal, test time was about 1.000000 seconds 00:34:19.104 00:34:19.104 Latency(us) 00:34:19.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:19.104 =================================================================================================================== 00:34:19.104 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:19.104 09:43:06 keyring_linux -- common/autotest_common.sh@972 -- # wait 953528 00:34:19.366 09:43:06 keyring_linux -- keyring/linux.sh@42 -- # killprocess 953470 00:34:19.366 09:43:06 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 953470 ']' 00:34:19.366 09:43:06 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 953470 00:34:19.366 09:43:06 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:34:19.366 09:43:06 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:19.366 09:43:06 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 953470 00:34:19.366 09:43:06 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:19.366 09:43:06 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:19.366 09:43:06 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 953470' 00:34:19.366 killing process with pid 953470 00:34:19.366 09:43:06 keyring_linux -- common/autotest_common.sh@967 -- # kill 953470 00:34:19.366 09:43:06 keyring_linux -- common/autotest_common.sh@972 -- # wait 953470 00:34:19.627 00:34:19.627 real 0m4.895s 00:34:19.627 user 0m8.776s 00:34:19.627 sys 0m1.403s 00:34:19.627 09:43:06 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:19.627 09:43:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:19.627 ************************************ 00:34:19.627 END TEST keyring_linux 00:34:19.627 ************************************ 00:34:19.627 09:43:06 -- common/autotest_common.sh@1142 -- # return 0 00:34:19.627 09:43:06 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:34:19.627 09:43:06 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:34:19.627 09:43:06 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:34:19.627 09:43:06 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:34:19.627 09:43:06 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:34:19.627 09:43:06 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:34:19.627 09:43:06 -- spdk/autotest.sh@339 -- # 
'[' 0 -eq 1 ']' 00:34:19.627 09:43:06 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:34:19.627 09:43:06 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:34:19.627 09:43:06 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:34:19.627 09:43:06 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:34:19.627 09:43:06 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:34:19.627 09:43:06 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:34:19.627 09:43:06 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:34:19.627 09:43:06 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:34:19.627 09:43:06 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:34:19.627 09:43:06 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:34:19.627 09:43:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:19.627 09:43:06 -- common/autotest_common.sh@10 -- # set +x 00:34:19.627 09:43:06 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:34:19.628 09:43:06 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:34:19.628 09:43:06 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:34:19.628 09:43:06 -- common/autotest_common.sh@10 -- # set +x 00:34:27.777 INFO: APP EXITING 00:34:27.777 INFO: killing all VMs 00:34:27.777 INFO: killing vhost app 00:34:27.777 INFO: EXIT DONE 00:34:31.152 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:34:31.152 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:34:31.152 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:34:31.152 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:34:31.152 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:34:31.152 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:34:31.152 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:34:31.152 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:34:31.152 0000:65:00.0 (144d a80a): Already using the nvme driver 00:34:31.152 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:34:31.152 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:34:31.152 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:34:31.152 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:34:31.152 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:34:31.152 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:34:31.152 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:34:31.152 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:34:35.359 Cleaning 00:34:35.359 Removing: /var/run/dpdk/spdk0/config 00:34:35.359 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:35.359 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:35.359 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:35.359 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:35.359 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:34:35.359 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:34:35.359 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:34:35.359 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:34:35.359 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:35.359 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:35.359 Removing: /var/run/dpdk/spdk1/config 00:34:35.359 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:35.359 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:35.359 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:35.359 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:35.359 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:34:35.359 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:34:35.359 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:34:35.359 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:34:35.359 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:35.359 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:35.359 Removing: /var/run/dpdk/spdk1/mp_socket 00:34:35.359 Removing: /var/run/dpdk/spdk2/config 00:34:35.359 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:34:35.359 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:34:35.359 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:34:35.359 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:34:35.359 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:34:35.359 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:34:35.359 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:34:35.359 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:34:35.359 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:34:35.359 Removing: /var/run/dpdk/spdk2/hugepage_info 00:34:35.359 Removing: /var/run/dpdk/spdk3/config 00:34:35.359 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:34:35.359 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:34:35.359 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:34:35.359 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:34:35.359 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:34:35.359 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:34:35.359 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:34:35.359 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:34:35.359 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:34:35.359 Removing: /var/run/dpdk/spdk3/hugepage_info 00:34:35.359 Removing: /var/run/dpdk/spdk4/config 00:34:35.359 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:34:35.359 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:34:35.359 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:34:35.359 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:34:35.359 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:34:35.359 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:34:35.359 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:34:35.359 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:34:35.359 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:34:35.359 Removing: /var/run/dpdk/spdk4/hugepage_info 00:34:35.359 Removing: /dev/shm/bdev_svc_trace.1 00:34:35.359 Removing: /dev/shm/nvmf_trace.0 00:34:35.359 Removing: /dev/shm/spdk_tgt_trace.pid468677 00:34:35.359 Removing: /var/run/dpdk/spdk0 00:34:35.359 Removing: /var/run/dpdk/spdk1 00:34:35.359 Removing: /var/run/dpdk/spdk2 00:34:35.359 Removing: /var/run/dpdk/spdk3 00:34:35.359 Removing: /var/run/dpdk/spdk4 00:34:35.359 Removing: /var/run/dpdk/spdk_pid467109 00:34:35.359 Removing: /var/run/dpdk/spdk_pid468677 00:34:35.359 Removing: /var/run/dpdk/spdk_pid469307 00:34:35.359 Removing: /var/run/dpdk/spdk_pid470736 00:34:35.359 Removing: /var/run/dpdk/spdk_pid471038 00:34:35.359 Removing: /var/run/dpdk/spdk_pid472412 00:34:35.359 Removing: /var/run/dpdk/spdk_pid472463 00:34:35.359 Removing: /var/run/dpdk/spdk_pid472886 00:34:35.359 Removing: /var/run/dpdk/spdk_pid473786 00:34:35.359 Removing: /var/run/dpdk/spdk_pid474486 00:34:35.359 Removing: /var/run/dpdk/spdk_pid474875 00:34:35.359 Removing: /var/run/dpdk/spdk_pid475225 
00:34:35.359 Removing: /var/run/dpdk/spdk_pid475518 00:34:35.359 Removing: /var/run/dpdk/spdk_pid475765 00:34:35.359 Removing: /var/run/dpdk/spdk_pid476098 00:34:35.359 Removing: /var/run/dpdk/spdk_pid476446 00:34:35.359 Removing: /var/run/dpdk/spdk_pid476831 00:34:35.359 Removing: /var/run/dpdk/spdk_pid477895 00:34:35.359 Removing: /var/run/dpdk/spdk_pid481311 00:34:35.359 Removing: /var/run/dpdk/spdk_pid481623 00:34:35.359 Removing: /var/run/dpdk/spdk_pid482010 00:34:35.359 Removing: /var/run/dpdk/spdk_pid482215 00:34:35.359 Removing: /var/run/dpdk/spdk_pid482590 00:34:35.359 Removing: /var/run/dpdk/spdk_pid482809 00:34:35.359 Removing: /var/run/dpdk/spdk_pid483287 00:34:35.359 Removing: /var/run/dpdk/spdk_pid483305 00:34:35.359 Removing: /var/run/dpdk/spdk_pid483670 00:34:35.359 Removing: /var/run/dpdk/spdk_pid483911 00:34:35.359 Removing: /var/run/dpdk/spdk_pid484040 00:34:35.359 Removing: /var/run/dpdk/spdk_pid484371 00:34:35.359 Removing: /var/run/dpdk/spdk_pid484815 00:34:35.359 Removing: /var/run/dpdk/spdk_pid485119 00:34:35.359 Removing: /var/run/dpdk/spdk_pid485381 00:34:35.359 Removing: /var/run/dpdk/spdk_pid485611 00:34:35.359 Removing: /var/run/dpdk/spdk_pid485748 00:34:35.359 Removing: /var/run/dpdk/spdk_pid486020 00:34:35.359 Removing: /var/run/dpdk/spdk_pid486277 00:34:35.359 Removing: /var/run/dpdk/spdk_pid486476 00:34:35.359 Removing: /var/run/dpdk/spdk_pid486759 00:34:35.359 Removing: /var/run/dpdk/spdk_pid487106 00:34:35.359 Removing: /var/run/dpdk/spdk_pid487463 00:34:35.359 Removing: /var/run/dpdk/spdk_pid487752 00:34:35.359 Removing: /var/run/dpdk/spdk_pid487946 00:34:35.359 Removing: /var/run/dpdk/spdk_pid488199 00:34:35.359 Removing: /var/run/dpdk/spdk_pid488551 00:34:35.359 Removing: /var/run/dpdk/spdk_pid488905 00:34:35.359 Removing: /var/run/dpdk/spdk_pid489257 00:34:35.359 Removing: /var/run/dpdk/spdk_pid489457 00:34:35.359 Removing: /var/run/dpdk/spdk_pid489664 00:34:35.359 Removing: /var/run/dpdk/spdk_pid489997 00:34:35.359 Removing: /var/run/dpdk/spdk_pid490347 00:34:35.359 Removing: /var/run/dpdk/spdk_pid490700 00:34:35.359 Removing: /var/run/dpdk/spdk_pid490931 00:34:35.359 Removing: /var/run/dpdk/spdk_pid491128 00:34:35.359 Removing: /var/run/dpdk/spdk_pid491445 00:34:35.359 Removing: /var/run/dpdk/spdk_pid491799 00:34:35.359 Removing: /var/run/dpdk/spdk_pid491898 00:34:35.359 Removing: /var/run/dpdk/spdk_pid492273 00:34:35.359 Removing: /var/run/dpdk/spdk_pid497357 00:34:35.359 Removing: /var/run/dpdk/spdk_pid555648 00:34:35.359 Removing: /var/run/dpdk/spdk_pid561113 00:34:35.359 Removing: /var/run/dpdk/spdk_pid573598 00:34:35.359 Removing: /var/run/dpdk/spdk_pid581119 00:34:35.359 Removing: /var/run/dpdk/spdk_pid586256 00:34:35.359 Removing: /var/run/dpdk/spdk_pid587134 00:34:35.359 Removing: /var/run/dpdk/spdk_pid594694 00:34:35.359 Removing: /var/run/dpdk/spdk_pid602547 00:34:35.359 Removing: /var/run/dpdk/spdk_pid602574 00:34:35.359 Removing: /var/run/dpdk/spdk_pid603577 00:34:35.359 Removing: /var/run/dpdk/spdk_pid604582 00:34:35.359 Removing: /var/run/dpdk/spdk_pid605588 00:34:35.359 Removing: /var/run/dpdk/spdk_pid606261 00:34:35.359 Removing: /var/run/dpdk/spdk_pid606271 00:34:35.359 Removing: /var/run/dpdk/spdk_pid606601 00:34:35.359 Removing: /var/run/dpdk/spdk_pid606616 00:34:35.359 Removing: /var/run/dpdk/spdk_pid606642 00:34:35.359 Removing: /var/run/dpdk/spdk_pid607708 00:34:35.359 Removing: /var/run/dpdk/spdk_pid608734 00:34:35.359 Removing: /var/run/dpdk/spdk_pid609821 00:34:35.359 Removing: /var/run/dpdk/spdk_pid610471 00:34:35.359 
Removing: /var/run/dpdk/spdk_pid610603 00:34:35.359 Removing: /var/run/dpdk/spdk_pid610860 00:34:35.359 Removing: /var/run/dpdk/spdk_pid612108 00:34:35.359 Removing: /var/run/dpdk/spdk_pid613488 00:34:35.359 Removing: /var/run/dpdk/spdk_pid624285 00:34:35.359 Removing: /var/run/dpdk/spdk_pid624640 00:34:35.359 Removing: /var/run/dpdk/spdk_pid630487 00:34:35.359 Removing: /var/run/dpdk/spdk_pid637891 00:34:35.360 Removing: /var/run/dpdk/spdk_pid640971 00:34:35.360 Removing: /var/run/dpdk/spdk_pid654155 00:34:35.360 Removing: /var/run/dpdk/spdk_pid665876 00:34:35.360 Removing: /var/run/dpdk/spdk_pid667888 00:34:35.360 Removing: /var/run/dpdk/spdk_pid668941 00:34:35.360 Removing: /var/run/dpdk/spdk_pid691446 00:34:35.360 Removing: /var/run/dpdk/spdk_pid696647 00:34:35.360 Removing: /var/run/dpdk/spdk_pid727577 00:34:35.360 Removing: /var/run/dpdk/spdk_pid733484 00:34:35.360 Removing: /var/run/dpdk/spdk_pid735362 00:34:35.360 Removing: /var/run/dpdk/spdk_pid737628 00:34:35.360 Removing: /var/run/dpdk/spdk_pid737916 00:34:35.360 Removing: /var/run/dpdk/spdk_pid738001 00:34:35.360 Removing: /var/run/dpdk/spdk_pid738325 00:34:35.360 Removing: /var/run/dpdk/spdk_pid738829 00:34:35.360 Removing: /var/run/dpdk/spdk_pid741054 00:34:35.360 Removing: /var/run/dpdk/spdk_pid742130 00:34:35.360 Removing: /var/run/dpdk/spdk_pid742508 00:34:35.360 Removing: /var/run/dpdk/spdk_pid745209 00:34:35.360 Removing: /var/run/dpdk/spdk_pid745914 00:34:35.360 Removing: /var/run/dpdk/spdk_pid746632 00:34:35.360 Removing: /var/run/dpdk/spdk_pid752061 00:34:35.620 Removing: /var/run/dpdk/spdk_pid765128 00:34:35.620 Removing: /var/run/dpdk/spdk_pid769843 00:34:35.620 Removing: /var/run/dpdk/spdk_pid778289 00:34:35.620 Removing: /var/run/dpdk/spdk_pid779783 00:34:35.620 Removing: /var/run/dpdk/spdk_pid781548 00:34:35.620 Removing: /var/run/dpdk/spdk_pid787164 00:34:35.620 Removing: /var/run/dpdk/spdk_pid792596 00:34:35.620 Removing: /var/run/dpdk/spdk_pid802560 00:34:35.620 Removing: /var/run/dpdk/spdk_pid802568 00:34:35.620 Removing: /var/run/dpdk/spdk_pid808131 00:34:35.620 Removing: /var/run/dpdk/spdk_pid808295 00:34:35.620 Removing: /var/run/dpdk/spdk_pid808632 00:34:35.620 Removing: /var/run/dpdk/spdk_pid809191 00:34:35.620 Removing: /var/run/dpdk/spdk_pid809300 00:34:35.620 Removing: /var/run/dpdk/spdk_pid815157 00:34:35.620 Removing: /var/run/dpdk/spdk_pid815854 00:34:35.620 Removing: /var/run/dpdk/spdk_pid821700 00:34:35.620 Removing: /var/run/dpdk/spdk_pid824825 00:34:35.620 Removing: /var/run/dpdk/spdk_pid831791 00:34:35.620 Removing: /var/run/dpdk/spdk_pid839226 00:34:35.620 Removing: /var/run/dpdk/spdk_pid849510 00:34:35.620 Removing: /var/run/dpdk/spdk_pid858589 00:34:35.620 Removing: /var/run/dpdk/spdk_pid858591 00:34:35.620 Removing: /var/run/dpdk/spdk_pid882838 00:34:35.620 Removing: /var/run/dpdk/spdk_pid883575 00:34:35.620 Removing: /var/run/dpdk/spdk_pid884261 00:34:35.620 Removing: /var/run/dpdk/spdk_pid884947 00:34:35.620 Removing: /var/run/dpdk/spdk_pid886049 00:34:35.620 Removing: /var/run/dpdk/spdk_pid886792 00:34:35.620 Removing: /var/run/dpdk/spdk_pid887469 00:34:35.620 Removing: /var/run/dpdk/spdk_pid888607 00:34:35.620 Removing: /var/run/dpdk/spdk_pid894326 00:34:35.620 Removing: /var/run/dpdk/spdk_pid894610 00:34:35.621 Removing: /var/run/dpdk/spdk_pid902065 00:34:35.621 Removing: /var/run/dpdk/spdk_pid902433 00:34:35.621 Removing: /var/run/dpdk/spdk_pid905001 00:34:35.621 Removing: /var/run/dpdk/spdk_pid912735 00:34:35.621 Removing: /var/run/dpdk/spdk_pid912762 00:34:35.621 Removing: 
/var/run/dpdk/spdk_pid919401 00:34:35.621 Removing: /var/run/dpdk/spdk_pid921817 00:34:35.621 Removing: /var/run/dpdk/spdk_pid924112 00:34:35.621 Removing: /var/run/dpdk/spdk_pid925584 00:34:35.621 Removing: /var/run/dpdk/spdk_pid927863 00:34:35.621 Removing: /var/run/dpdk/spdk_pid929344 00:34:35.621 Removing: /var/run/dpdk/spdk_pid940649 00:34:35.621 Removing: /var/run/dpdk/spdk_pid941140 00:34:35.621 Removing: /var/run/dpdk/spdk_pid941770 00:34:35.621 Removing: /var/run/dpdk/spdk_pid944832 00:34:35.621 Removing: /var/run/dpdk/spdk_pid945499 00:34:35.621 Removing: /var/run/dpdk/spdk_pid945991 00:34:35.621 Removing: /var/run/dpdk/spdk_pid951099 00:34:35.621 Removing: /var/run/dpdk/spdk_pid951281 00:34:35.621 Removing: /var/run/dpdk/spdk_pid952876 00:34:35.621 Removing: /var/run/dpdk/spdk_pid953470 00:34:35.621 Removing: /var/run/dpdk/spdk_pid953528 00:34:35.621 Clean 00:34:35.881 09:43:22 -- common/autotest_common.sh@1451 -- # return 0 00:34:35.881 09:43:22 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:34:35.881 09:43:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:35.881 09:43:22 -- common/autotest_common.sh@10 -- # set +x 00:34:35.881 09:43:22 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:34:35.881 09:43:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:35.881 09:43:22 -- common/autotest_common.sh@10 -- # set +x 00:34:35.881 09:43:22 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:35.881 09:43:22 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:34:35.881 09:43:22 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:34:35.881 09:43:22 -- spdk/autotest.sh@391 -- # hash lcov 00:34:35.881 09:43:22 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:34:35.881 09:43:22 -- spdk/autotest.sh@393 -- # hostname 00:34:35.881 09:43:22 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:34:36.142 geninfo: WARNING: invalid characters removed from testname! 
00:35:02.729 09:43:47 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:02.990 09:43:50 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:05.538 09:43:52 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:07.454 09:43:54 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:08.841 09:43:55 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:10.758 09:43:57 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:12.143 09:43:59 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:12.143 09:43:59 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:12.143 09:43:59 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:35:12.143 09:43:59 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:12.143 09:43:59 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:12.143 09:43:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.143 09:43:59 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.143 09:43:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.143 09:43:59 -- paths/export.sh@5 -- $ export PATH 00:35:12.143 09:43:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.143 09:43:59 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:35:12.143 09:43:59 -- common/autobuild_common.sh@444 -- $ date +%s 00:35:12.143 09:43:59 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721029439.XXXXXX 00:35:12.143 09:43:59 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721029439.DwlL44 00:35:12.143 09:43:59 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:35:12.143 09:43:59 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:35:12.143 09:43:59 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:35:12.143 09:43:59 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:35:12.143 09:43:59 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:35:12.143 09:43:59 -- common/autobuild_common.sh@460 -- $ get_config_params 00:35:12.143 09:43:59 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:35:12.143 09:43:59 -- common/autotest_common.sh@10 -- $ set +x 00:35:12.143 09:43:59 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:35:12.143 09:43:59 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:35:12.143 09:43:59 -- pm/common@17 -- $ local monitor 00:35:12.143 09:43:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:12.143 09:43:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:12.143 09:43:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:12.143 09:43:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:12.143 09:43:59 -- pm/common@21 -- $ date +%s 00:35:12.143 09:43:59 -- pm/common@25 -- $ sleep 1 00:35:12.143 
09:43:59 -- pm/common@21 -- $ date +%s 00:35:12.143 09:43:59 -- pm/common@21 -- $ date +%s 00:35:12.143 09:43:59 -- pm/common@21 -- $ date +%s 00:35:12.143 09:43:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721029439 00:35:12.143 09:43:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721029439 00:35:12.143 09:43:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721029439 00:35:12.143 09:43:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721029439 00:35:12.143 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721029439_collect-vmstat.pm.log 00:35:12.143 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721029439_collect-cpu-load.pm.log 00:35:12.143 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721029439_collect-cpu-temp.pm.log 00:35:12.143 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721029439_collect-bmc-pm.bmc.pm.log 00:35:13.089 09:44:00 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:35:13.089 09:44:00 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:35:13.089 09:44:00 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:13.089 09:44:00 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:35:13.089 09:44:00 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:35:13.089 09:44:00 -- spdk/autopackage.sh@19 -- $ timing_finish 00:35:13.089 09:44:00 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:13.089 09:44:00 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:35:13.089 09:44:00 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:35:13.351 09:44:00 -- spdk/autopackage.sh@20 -- $ exit 0 00:35:13.351 09:44:00 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:35:13.351 09:44:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:35:13.351 09:44:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:35:13.351 09:44:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:13.351 09:44:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:35:13.351 09:44:00 -- pm/common@44 -- $ pid=966297 00:35:13.351 09:44:00 -- pm/common@50 -- $ kill -TERM 966297 00:35:13.351 09:44:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:13.351 09:44:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:35:13.351 09:44:00 -- pm/common@44 -- $ pid=966298 00:35:13.351 09:44:00 -- pm/common@50 -- $ kill 
-TERM 966298 00:35:13.351 09:44:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:13.351 09:44:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:35:13.351 09:44:00 -- pm/common@44 -- $ pid=966300 00:35:13.351 09:44:00 -- pm/common@50 -- $ kill -TERM 966300 00:35:13.351 09:44:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:13.351 09:44:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:35:13.351 09:44:00 -- pm/common@44 -- $ pid=966319 00:35:13.351 09:44:00 -- pm/common@50 -- $ sudo -E kill -TERM 966319 00:35:13.351 + [[ -n 342652 ]] 00:35:13.351 + sudo kill 342652 00:35:13.363 [Pipeline] } 00:35:13.388 [Pipeline] // stage 00:35:13.395 [Pipeline] } 00:35:13.414 [Pipeline] // timeout 00:35:13.419 [Pipeline] } 00:35:13.439 [Pipeline] // catchError 00:35:13.445 [Pipeline] } 00:35:13.467 [Pipeline] // wrap 00:35:13.475 [Pipeline] } 00:35:13.494 [Pipeline] // catchError 00:35:13.506 [Pipeline] stage 00:35:13.509 [Pipeline] { (Epilogue) 00:35:13.528 [Pipeline] catchError 00:35:13.530 [Pipeline] { 00:35:13.545 [Pipeline] echo 00:35:13.547 Cleanup processes 00:35:13.553 [Pipeline] sh 00:35:13.844 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:13.844 966402 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:35:13.844 966846 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:13.862 [Pipeline] sh 00:35:14.152 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:14.152 ++ grep -v 'sudo pgrep' 00:35:14.152 ++ awk '{print $1}' 00:35:14.152 + sudo kill -9 966402 00:35:14.167 [Pipeline] sh 00:35:14.458 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:26.756 [Pipeline] sh 00:35:27.048 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:27.048 Artifacts sizes are good 00:35:27.063 [Pipeline] archiveArtifacts 00:35:27.070 Archiving artifacts 00:35:27.258 [Pipeline] sh 00:35:27.543 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:35:27.560 [Pipeline] cleanWs 00:35:27.570 [WS-CLEANUP] Deleting project workspace... 00:35:27.570 [WS-CLEANUP] Deferred wipeout is used... 00:35:27.576 [WS-CLEANUP] done 00:35:27.579 [Pipeline] } 00:35:27.600 [Pipeline] // catchError 00:35:27.609 [Pipeline] sh 00:35:27.892 + logger -p user.info -t JENKINS-CI 00:35:27.901 [Pipeline] } 00:35:27.918 [Pipeline] // stage 00:35:27.923 [Pipeline] } 00:35:27.940 [Pipeline] // node 00:35:27.945 [Pipeline] End of Pipeline 00:35:28.079 Finished: SUCCESS